Test Report: Docker_Linux_crio 21652

b9467c4b05d043dd40c691e5c40c4e59f96d3adc:2025-09-29:41683

Failed tests (18/325)

TestAddons/parallel/Ingress (158.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-850167 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-850167 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-850167 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a59938b6-9498-4320-b655-1b978d6a1978] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a59938b6-9498-4320-b655-1b978d6a1978] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.055827816s
I0929 12:29:01.191331  567516 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-850167 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.564125604s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-850167 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-850167
helpers_test.go:243: (dbg) docker inspect addons-850167:

-- stdout --
	[
	    {
	        "Id": "4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f",
	        "Created": "2025-09-29T12:26:07.184759192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 569473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:26:07.228210785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f/hosts",
	        "LogPath": "/var/lib/docker/containers/4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f/4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f-json.log",
	        "Name": "/addons-850167",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-850167:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-850167",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b72ff6e774aba9eccc0230a69fc4918e343e583fe593aa3112b9d6b35bbd08f",
	                "LowerDir": "/var/lib/docker/overlay2/3837f9f03beba029e415d9f4a578ae08c59d8d97c0915e24b2bb848a861bc0fe-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3837f9f03beba029e415d9f4a578ae08c59d8d97c0915e24b2bb848a861bc0fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3837f9f03beba029e415d9f4a578ae08c59d8d97c0915e24b2bb848a861bc0fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3837f9f03beba029e415d9f4a578ae08c59d8d97c0915e24b2bb848a861bc0fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-850167",
	                "Source": "/var/lib/docker/volumes/addons-850167/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-850167",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-850167",
	                "name.minikube.sigs.k8s.io": "addons-850167",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "887786a250fbb3f8953acac3ece861b2737966e89d3f6e35b191b57abb6bf1e7",
	            "SandboxKey": "/var/run/docker/netns/887786a250fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-850167": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:73:59:8c:1a:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "050efb8e0937fcc48e4367cfb2b8ef011dc379f58f032e31c40740b56fe5898f",
	                    "EndpointID": "8a9305753ccf23d040aef4c97dfd9e620d1f37e532b7ace3215910bd69c8ae48",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-850167",
	                        "4b72ff6e774a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-850167 -n addons-850167
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 logs -n 25: (1.327795331s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-475224 --alsologtostderr --binary-mirror http://127.0.0.1:40411 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-475224 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │                     │
	│ delete  │ -p binary-mirror-475224                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-475224 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ addons  │ disable dashboard -p addons-850167                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-850167                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │                     │
	│ start   │ -p addons-850167 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ enable headlamp -p addons-850167 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-850167                                                                                                                                                                                                                                                                                                                                                                                           │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ ip      │ addons-850167 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ addons  │ addons-850167 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:28 UTC │ 29 Sep 25 12:28 UTC │
	│ ssh     │ addons-850167 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │                     │
	│ addons  │ addons-850167 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │ 29 Sep 25 12:29 UTC │
	│ ssh     │ addons-850167 ssh cat /opt/local-path-provisioner/pvc-0da0b4bd-5e36-413e-b23b-30cac371151a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │ 29 Sep 25 12:29 UTC │
	│ addons  │ addons-850167 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │ 29 Sep 25 12:29 UTC │
	│ addons  │ addons-850167 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │ 29 Sep 25 12:29 UTC │
	│ addons  │ addons-850167 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:29 UTC │ 29 Sep 25 12:29 UTC │
	│ ip      │ addons-850167 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-850167        │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:25:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:25:42.456899  568833 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:25:42.457073  568833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:42.457085  568833 out.go:374] Setting ErrFile to fd 2...
	I0929 12:25:42.457091  568833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:42.457296  568833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:25:42.457866  568833 out.go:368] Setting JSON to false
	I0929 12:25:42.458819  568833 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7687,"bootTime":1759141055,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:25:42.458924  568833 start.go:140] virtualization: kvm guest
	I0929 12:25:42.460892  568833 out.go:179] * [addons-850167] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:25:42.462334  568833 notify.go:220] Checking for updates...
	I0929 12:25:42.462359  568833 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:25:42.463819  568833 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:25:42.465566  568833 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:25:42.466998  568833 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:25:42.468467  568833 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:25:42.469779  568833 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:25:42.471434  568833 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:25:42.496414  568833 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:25:42.496568  568833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:42.553160  568833 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 12:25:42.542613202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:42.553281  568833 docker.go:318] overlay module found
	I0929 12:25:42.555305  568833 out.go:179] * Using the docker driver based on user configuration
	I0929 12:25:42.556843  568833 start.go:304] selected driver: docker
	I0929 12:25:42.556860  568833 start.go:924] validating driver "docker" against <nil>
	I0929 12:25:42.556872  568833 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:25:42.557497  568833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:42.615385  568833 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 12:25:42.603376024 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:42.615552  568833 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:25:42.615771  568833 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:25:42.617644  568833 out.go:179] * Using Docker driver with root privileges
	I0929 12:25:42.619284  568833 cni.go:84] Creating CNI manager for ""
	I0929 12:25:42.619363  568833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 12:25:42.619374  568833 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 12:25:42.619470  568833 start.go:348] cluster config:
	{Name:addons-850167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-850167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 12:25:42.621124  568833 out.go:179] * Starting "addons-850167" primary control-plane node in "addons-850167" cluster
	I0929 12:25:42.622866  568833 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 12:25:42.624437  568833 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:25:42.625957  568833 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:25:42.626018  568833 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 12:25:42.626034  568833 cache.go:58] Caching tarball of preloaded images
	I0929 12:25:42.626094  568833 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:25:42.626168  568833 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 12:25:42.626184  568833 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 12:25:42.626584  568833 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/config.json ...
	I0929 12:25:42.626613  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/config.json: {Name:mkcbdbe63bf19e2761bc5b03fdd42707a32028eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:25:42.644402  568833 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 12:25:42.644566  568833 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 12:25:42.644590  568833 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 12:25:42.644601  568833 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 12:25:42.644615  568833 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 12:25:42.644625  568833 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 12:25:55.427301  568833 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 12:25:55.427358  568833 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:25:55.427398  568833 start.go:360] acquireMachinesLock for addons-850167: {Name:mk793c0df280de4adb4c15c7bc605cb8f7dc5f46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:25:55.427526  568833 start.go:364] duration metric: took 102.648µs to acquireMachinesLock for "addons-850167"
	I0929 12:25:55.427564  568833 start.go:93] Provisioning new machine with config: &{Name:addons-850167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-850167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:25:55.427678  568833 start.go:125] createHost starting for "" (driver="docker")
	I0929 12:25:55.429559  568833 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 12:25:55.429902  568833 start.go:159] libmachine.API.Create for "addons-850167" (driver="docker")
	I0929 12:25:55.429944  568833 client.go:168] LocalClient.Create starting
	I0929 12:25:55.430100  568833 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem
	I0929 12:25:55.680903  568833 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem
	I0929 12:25:55.875103  568833 cli_runner.go:164] Run: docker network inspect addons-850167 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 12:25:55.893443  568833 cli_runner.go:211] docker network inspect addons-850167 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 12:25:55.893531  568833 network_create.go:284] running [docker network inspect addons-850167] to gather additional debugging logs...
	I0929 12:25:55.893554  568833 cli_runner.go:164] Run: docker network inspect addons-850167
	W0929 12:25:55.910704  568833 cli_runner.go:211] docker network inspect addons-850167 returned with exit code 1
	I0929 12:25:55.910744  568833 network_create.go:287] error running [docker network inspect addons-850167]: docker network inspect addons-850167: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-850167 not found
	I0929 12:25:55.910762  568833 network_create.go:289] output of [docker network inspect addons-850167]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-850167 not found
	
	** /stderr **
	I0929 12:25:55.910918  568833 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:25:55.928786  568833 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003b0be0}
	I0929 12:25:55.928837  568833 network_create.go:124] attempt to create docker network addons-850167 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 12:25:55.928930  568833 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-850167 addons-850167
	I0929 12:25:55.989608  568833 network_create.go:108] docker network addons-850167 192.168.49.0/24 created
	I0929 12:25:55.989639  568833 kic.go:121] calculated static IP "192.168.49.2" for the "addons-850167" container
	I0929 12:25:55.989702  568833 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 12:25:56.006900  568833 cli_runner.go:164] Run: docker volume create addons-850167 --label name.minikube.sigs.k8s.io=addons-850167 --label created_by.minikube.sigs.k8s.io=true
	I0929 12:25:56.027497  568833 oci.go:103] Successfully created a docker volume addons-850167
	I0929 12:25:56.027601  568833 cli_runner.go:164] Run: docker run --rm --name addons-850167-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850167 --entrypoint /usr/bin/test -v addons-850167:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 12:26:02.839621  568833 cli_runner.go:217] Completed: docker run --rm --name addons-850167-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850167 --entrypoint /usr/bin/test -v addons-850167:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.811959243s)
	I0929 12:26:02.839652  568833 oci.go:107] Successfully prepared a docker volume addons-850167
	I0929 12:26:02.839688  568833 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:26:02.839717  568833 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 12:26:02.839792  568833 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-850167:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 12:26:07.107497  568833 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-850167:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.267645945s)
	I0929 12:26:07.107552  568833 kic.go:203] duration metric: took 4.267832853s to extract preloaded images to volume ...
	W0929 12:26:07.107661  568833 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 12:26:07.107697  568833 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 12:26:07.107742  568833 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 12:26:07.168051  568833 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-850167 --name addons-850167 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850167 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-850167 --network addons-850167 --ip 192.168.49.2 --volume addons-850167:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 12:26:07.457228  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Running}}
	I0929 12:26:07.476672  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:07.496582  568833 cli_runner.go:164] Run: docker exec addons-850167 stat /var/lib/dpkg/alternatives/iptables
	I0929 12:26:07.551625  568833 oci.go:144] the created container "addons-850167" has a running status.
	I0929 12:26:07.551663  568833 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa...
	I0929 12:26:07.758489  568833 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 12:26:07.790306  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:07.812041  568833 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 12:26:07.812063  568833 kic_runner.go:114] Args: [docker exec --privileged addons-850167 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 12:26:07.863223  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:07.883393  568833 machine.go:93] provisionDockerMachine start ...
	I0929 12:26:07.883495  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:07.903851  568833 main.go:141] libmachine: Using SSH client type: native
	I0929 12:26:07.904248  568833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0929 12:26:07.904308  568833 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:26:08.045200  568833 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-850167
	
	I0929 12:26:08.045235  568833 ubuntu.go:182] provisioning hostname "addons-850167"
	I0929 12:26:08.045299  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:08.065686  568833 main.go:141] libmachine: Using SSH client type: native
	I0929 12:26:08.065924  568833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0929 12:26:08.065938  568833 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-850167 && echo "addons-850167" | sudo tee /etc/hostname
	I0929 12:26:08.219133  568833 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-850167
	
	I0929 12:26:08.219239  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:08.237633  568833 main.go:141] libmachine: Using SSH client type: native
	I0929 12:26:08.237924  568833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0929 12:26:08.237964  568833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-850167' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-850167/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-850167' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:26:08.374960  568833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:26:08.375002  568833 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 12:26:08.375075  568833 ubuntu.go:190] setting up certificates
	I0929 12:26:08.375096  568833 provision.go:84] configureAuth start
	I0929 12:26:08.375162  568833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850167
	I0929 12:26:08.393306  568833 provision.go:143] copyHostCerts
	I0929 12:26:08.393413  568833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 12:26:08.393552  568833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 12:26:08.393642  568833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 12:26:08.393718  568833 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.addons-850167 san=[127.0.0.1 192.168.49.2 addons-850167 localhost minikube]
	I0929 12:26:08.723052  568833 provision.go:177] copyRemoteCerts
	I0929 12:26:08.723204  568833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:26:08.723247  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:08.741216  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:08.840294  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 12:26:08.869560  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:26:08.897051  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:26:08.923092  568833 provision.go:87] duration metric: took 547.978224ms to configureAuth
	I0929 12:26:08.923123  568833 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:26:08.923324  568833 config.go:182] Loaded profile config "addons-850167": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:26:08.923450  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:08.942179  568833 main.go:141] libmachine: Using SSH client type: native
	I0929 12:26:08.942414  568833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0929 12:26:08.942436  568833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 12:26:09.193292  568833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 12:26:09.193324  568833 machine.go:96] duration metric: took 1.309908791s to provisionDockerMachine
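The sysconfig write a few lines above puts CRIO_MINIKUBE_OPTIONS ('--insecure-registry 10.96.0.0/12') into /etc/sysconfig/crio.minikube and restarts CRI-O; whether the crio unit actually sources that file depends on the kicbase image. A sketch for checking it from the node, using only the profile name and path from this log:
	minikube -p addons-850167 ssh -- sudo cat /etc/sysconfig/crio.minikube
	minikube -p addons-850167 ssh -- systemctl cat crio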
	I0929 12:26:09.193342  568833 client.go:171] duration metric: took 13.763390228s to LocalClient.Create
	I0929 12:26:09.193384  568833 start.go:167] duration metric: took 13.76348415s to libmachine.API.Create "addons-850167"
	I0929 12:26:09.193400  568833 start.go:293] postStartSetup for "addons-850167" (driver="docker")
	I0929 12:26:09.193413  568833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:26:09.193488  568833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:26:09.193546  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:09.212556  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:09.313028  568833 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:26:09.317015  568833 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:26:09.317060  568833 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:26:09.317075  568833 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:26:09.317084  568833 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:26:09.317101  568833 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 12:26:09.317164  568833 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 12:26:09.317190  568833 start.go:296] duration metric: took 123.781697ms for postStartSetup
	I0929 12:26:09.317502  568833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850167
	I0929 12:26:09.336067  568833 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/config.json ...
	I0929 12:26:09.336349  568833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:26:09.336390  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:09.354766  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:09.448435  568833 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:26:09.453388  568833 start.go:128] duration metric: took 14.025689281s to createHost
	I0929 12:26:09.453419  568833 start.go:83] releasing machines lock for "addons-850167", held for 14.025875476s
	I0929 12:26:09.453501  568833 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850167
	I0929 12:26:09.472999  568833 ssh_runner.go:195] Run: cat /version.json
	I0929 12:26:09.473058  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:09.473076  568833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:26:09.473141  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:09.491643  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:09.491924  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:09.584495  568833 ssh_runner.go:195] Run: systemctl --version
	I0929 12:26:09.655408  568833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 12:26:09.797019  568833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:26:09.802199  568833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:26:09.826968  568833 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:26:09.827077  568833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:26:09.859462  568833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 12:26:09.859488  568833 start.go:495] detecting cgroup driver to use...
	I0929 12:26:09.859523  568833 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:26:09.859575  568833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:26:09.876721  568833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:26:09.889837  568833 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:26:09.889934  568833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:26:09.906138  568833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:26:09.922684  568833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:26:09.994088  568833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:26:10.071727  568833 docker.go:234] disabling docker service ...
	I0929 12:26:10.071805  568833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:26:10.091145  568833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:26:10.104118  568833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:26:10.172867  568833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:26:10.281006  568833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:26:10.293790  568833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:26:10.313429  568833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 12:26:10.313503  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.327386  568833 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 12:26:10.327446  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.338730  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.349990  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.361091  568833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:26:10.371385  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.383000  568833 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.400973  568833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:26:10.412531  568833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:26:10.422907  568833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:26:10.433206  568833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:26:10.542584  568833 ssh_runner.go:195] Run: sudo systemctl restart crio
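The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before this restart. A quick spot-check sketch, using only the file path and keys that appear in this log:
	minikube -p addons-850167 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf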
	I0929 12:26:10.638943  568833 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 12:26:10.639056  568833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 12:26:10.643075  568833 start.go:563] Will wait 60s for crictl version
	I0929 12:26:10.643136  568833 ssh_runner.go:195] Run: which crictl
	I0929 12:26:10.646934  568833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:26:10.684268  568833 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 12:26:10.684367  568833 ssh_runner.go:195] Run: crio --version
	I0929 12:26:10.720976  568833 ssh_runner.go:195] Run: crio --version
	I0929 12:26:10.761981  568833 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 12:26:10.763123  568833 cli_runner.go:164] Run: docker network inspect addons-850167 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:26:10.780980  568833 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 12:26:10.785247  568833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:26:10.797689  568833 kubeadm.go:875] updating cluster {Name:addons-850167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-850167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:26:10.797811  568833 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:26:10.797863  568833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:26:10.869059  568833 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:26:10.869088  568833 crio.go:433] Images already preloaded, skipping extraction
	I0929 12:26:10.869153  568833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:26:10.905286  568833 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:26:10.905312  568833 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:26:10.905321  568833 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 12:26:10.905434  568833 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-850167 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-850167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:26:10.905524  568833 ssh_runner.go:195] Run: crio config
	I0929 12:26:10.950790  568833 cni.go:84] Creating CNI manager for ""
	I0929 12:26:10.950816  568833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 12:26:10.950829  568833 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:26:10.950852  568833 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-850167 NodeName:addons-850167 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:26:10.951013  568833 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-850167"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
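This generated kubeadm manifest is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and promoted to kubeadm.yaml before kubeadm init. As a sketch (not part of the test run), it can be exercised on the node without creating a cluster via kubeadm's dry-run mode, assuming the binary and file paths shown in this log:
	minikube -p addons-850167 ssh -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run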
	
	I0929 12:26:10.951079  568833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:26:10.961586  568833 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:26:10.961680  568833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:26:10.971457  568833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 12:26:10.991320  568833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:26:11.014831  568833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 12:26:11.034856  568833 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:26:11.039006  568833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:26:11.051482  568833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:26:11.120829  568833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:26:11.147315  568833 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167 for IP: 192.168.49.2
	I0929 12:26:11.147345  568833 certs.go:194] generating shared ca certs ...
	I0929 12:26:11.147369  568833 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.147517  568833 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 12:26:11.309768  568833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt ...
	I0929 12:26:11.309808  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt: {Name:mk85fb19db88cb432dc2ab3074f8d28542e134e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.310066  568833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key ...
	I0929 12:26:11.310091  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key: {Name:mk11a699943f733451fbecaa0cf41d4497cdc10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.310209  568833 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 12:26:11.385476  568833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt ...
	I0929 12:26:11.385511  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt: {Name:mk367f04d4925705cb56499bbcd956367371cbcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.385719  568833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key ...
	I0929 12:26:11.385739  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key: {Name:mkffc3a241b597d20009b042688b35dd357e8387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.385844  568833 certs.go:256] generating profile certs ...
	I0929 12:26:11.385944  568833 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.key
	I0929 12:26:11.385966  568833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt with IP's: []
	I0929 12:26:11.796623  568833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt ...
	I0929 12:26:11.796662  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: {Name:mk71a02fda731b766b086a738762d8e3dcd2e2df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.796869  568833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.key ...
	I0929 12:26:11.796900  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.key: {Name:mk98ee4e18f180d62c4c3cb1e57066a8efef3b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:11.797021  568833 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key.2354af0b
	I0929 12:26:11.797047  568833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt.2354af0b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 12:26:12.355476  568833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt.2354af0b ...
	I0929 12:26:12.355518  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt.2354af0b: {Name:mkdda01cdf05a2a9260c15f263f4be66afcf3414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:12.355741  568833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key.2354af0b ...
	I0929 12:26:12.355771  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key.2354af0b: {Name:mk30e9a0efa2c867d7561bb949ec5c6c845054dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:12.355897  568833 certs.go:381] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt.2354af0b -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt
	I0929 12:26:12.356072  568833 certs.go:385] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key.2354af0b -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key
	I0929 12:26:12.356166  568833 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.key
	I0929 12:26:12.356196  568833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.crt with IP's: []
	I0929 12:26:12.745020  568833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.crt ...
	I0929 12:26:12.745059  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.crt: {Name:mk8dc7ee9d877b5d3640ad6655fc11150fbc78db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:12.745274  568833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.key ...
	I0929 12:26:12.745290  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.key: {Name:mk4bada1af2d7395cd9be5e38151745e0105d5b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:12.745518  568833 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 12:26:12.745556  568833 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:26:12.745579  568833 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:26:12.745599  568833 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 12:26:12.746287  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:26:12.773769  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:26:12.801718  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:26:12.829441  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:26:12.857812  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 12:26:12.887793  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:26:12.917590  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:26:12.946276  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:26:12.974911  568833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:26:13.005651  568833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:26:13.026339  568833 ssh_runner.go:195] Run: openssl version
	I0929 12:26:13.032737  568833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:26:13.047254  568833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:26:13.051651  568833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:26:13.051715  568833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:26:13.059039  568833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
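The two openssl/ln steps above follow OpenSSL's hashed-name convention for the trust store: the link name b5213941.0 is the subject hash of minikubeCA.pem plus a ".0" suffix. A sketch of reproducing the hash by hand, using the path from this log:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem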
	I0929 12:26:13.069631  568833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:26:13.073685  568833 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:26:13.073753  568833 kubeadm.go:392] StartCluster: {Name:addons-850167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-850167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:26:13.073865  568833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:26:13.074046  568833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:26:13.112946  568833 cri.go:89] found id: ""
	I0929 12:26:13.113022  568833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:26:13.123369  568833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:26:13.134055  568833 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 12:26:13.134126  568833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:26:13.144489  568833 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:26:13.144509  568833 kubeadm.go:157] found existing configuration files:
	
	I0929 12:26:13.144556  568833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:26:13.155051  568833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:26:13.155123  568833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:26:13.165030  568833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:26:13.176373  568833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:26:13.176431  568833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:26:13.186694  568833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:26:13.197103  568833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:26:13.197182  568833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:26:13.206984  568833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:26:13.217000  568833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:26:13.217066  568833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 12:26:13.226638  568833 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 12:26:13.268083  568833 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 12:26:13.268145  568833 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:26:13.285230  568833 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 12:26:13.285320  568833 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 12:26:13.285358  568833 kubeadm.go:310] OS: Linux
	I0929 12:26:13.285404  568833 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 12:26:13.285500  568833 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 12:26:13.285590  568833 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 12:26:13.285678  568833 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 12:26:13.285772  568833 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 12:26:13.285856  568833 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 12:26:13.285941  568833 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 12:26:13.286012  568833 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 12:26:13.353489  568833 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:26:13.353624  568833 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:26:13.353784  568833 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 12:26:13.361292  568833 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 12:26:13.364285  568833 out.go:252]   - Generating certificates and keys ...
	I0929 12:26:13.364413  568833 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:26:13.364505  568833 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:26:13.392102  568833 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:26:13.500426  568833 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:26:13.710088  568833 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:26:13.778924  568833 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:26:14.048610  568833 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:26:14.048780  568833 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-850167 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 12:26:14.235804  568833 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:26:14.235975  568833 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-850167 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 12:26:14.445223  568833 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:26:14.545682  568833 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:26:15.177811  568833 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:26:15.177917  568833 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:26:15.243931  568833 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:26:15.299606  568833 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 12:26:15.489877  568833 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:26:15.675084  568833 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:26:15.803982  568833 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:26:15.804463  568833 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:26:15.808415  568833 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 12:26:15.810313  568833 out.go:252]   - Booting up control plane ...
	I0929 12:26:15.810451  568833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:26:15.810552  568833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:26:15.810647  568833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:26:15.820483  568833 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:26:15.820626  568833 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 12:26:15.826732  568833 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 12:26:15.826919  568833 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:26:15.826960  568833 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:26:15.906290  568833 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 12:26:15.906426  568833 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 12:26:16.907040  568833 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000944368s
	I0929 12:26:16.912591  568833 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 12:26:16.912728  568833 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 12:26:16.912898  568833 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 12:26:16.913021  568833 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 12:26:19.110610  568833 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198207377s
	I0929 12:26:19.257633  568833 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.345494737s
	I0929 12:26:20.914673  568833 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002331889s
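The three control-plane checks above poll plain HTTPS health endpoints; the same URLs from this log can be hit by hand from inside the node, e.g. (sketch):
	minikube -p addons-850167 ssh -- curl -sk https://192.168.49.2:8443/livez
	minikube -p addons-850167 ssh -- curl -sk https://127.0.0.1:10257/healthz
	minikube -p addons-850167 ssh -- curl -sk https://127.0.0.1:10259/livez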
	I0929 12:26:20.926969  568833 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:26:20.938931  568833 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:26:20.949310  568833 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:26:20.949632  568833 kubeadm.go:310] [mark-control-plane] Marking the node addons-850167 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:26:20.958743  568833 kubeadm.go:310] [bootstrap-token] Using token: jr3fv1.tlr30ejbylem9yq7
	I0929 12:26:20.960127  568833 out.go:252]   - Configuring RBAC rules ...
	I0929 12:26:20.960291  568833 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:26:20.964212  568833 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:26:20.971046  568833 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:26:20.974531  568833 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:26:20.978702  568833 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:26:20.982105  568833 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:26:21.321170  568833 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:26:21.740330  568833 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:26:22.321173  568833 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:26:22.322052  568833 kubeadm.go:310] 
	I0929 12:26:22.322159  568833 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:26:22.322173  568833 kubeadm.go:310] 
	I0929 12:26:22.322259  568833 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:26:22.322271  568833 kubeadm.go:310] 
	I0929 12:26:22.322320  568833 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:26:22.322448  568833 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:26:22.322537  568833 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:26:22.322547  568833 kubeadm.go:310] 
	I0929 12:26:22.322628  568833 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:26:22.322641  568833 kubeadm.go:310] 
	I0929 12:26:22.322701  568833 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:26:22.322710  568833 kubeadm.go:310] 
	I0929 12:26:22.322786  568833 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:26:22.322925  568833 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:26:22.323023  568833 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:26:22.323036  568833 kubeadm.go:310] 
	I0929 12:26:22.323159  568833 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:26:22.323280  568833 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:26:22.323294  568833 kubeadm.go:310] 
	I0929 12:26:22.323410  568833 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jr3fv1.tlr30ejbylem9yq7 \
	I0929 12:26:22.323561  568833 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b \
	I0929 12:26:22.323591  568833 kubeadm.go:310] 	--control-plane 
	I0929 12:26:22.323600  568833 kubeadm.go:310] 
	I0929 12:26:22.323724  568833 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:26:22.323737  568833 kubeadm.go:310] 
	I0929 12:26:22.323858  568833 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jr3fv1.tlr30ejbylem9yq7 \
	I0929 12:26:22.324032  568833 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b 
	I0929 12:26:22.326314  568833 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 12:26:22.326426  568833 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 12:26:22.326447  568833 cni.go:84] Creating CNI manager for ""
	I0929 12:26:22.326461  568833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 12:26:22.328215  568833 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 12:26:22.329704  568833 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 12:26:22.334334  568833 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 12:26:22.334356  568833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 12:26:22.356022  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 12:26:22.581673  568833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:26:22.581816  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:22.581821  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-850167 minikube.k8s.io/updated_at=2025_09_29T12_26_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=addons-850167 minikube.k8s.io/primary=true
	I0929 12:26:22.591511  568833 ops.go:34] apiserver oom_adj: -16
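The clusterrolebinding created above (minikube-rbac) grants cluster-admin to kube-system's default service account; it can be inspected afterwards, as a sketch using the context name from this run:
	kubectl --context addons-850167 get clusterrolebinding minikube-rbac -o yaml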
	I0929 12:26:22.670434  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:23.171336  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:23.670599  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:24.171420  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:24.671346  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:25.171570  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:25.671172  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:26.171410  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:26.671327  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:27.170828  568833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:26:27.245732  568833 kubeadm.go:1105] duration metric: took 4.663993718s to wait for elevateKubeSystemPrivileges
	I0929 12:26:27.245788  568833 kubeadm.go:394] duration metric: took 14.172040159s to StartCluster
	I0929 12:26:27.245817  568833 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:27.246037  568833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:26:27.246561  568833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:26:27.246782  568833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:26:27.246802  568833 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:26:27.246904  568833 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
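The toEnable map above corresponds to the addon selection a user would make with the minikube CLI; as a hedged sketch (profile name taken from this log, flags as in current minikube releases), enabling and inspecting one of these addons by hand looks like:

	# enable the ingress addon for this profile
	minikube -p addons-850167 addons enable ingress
	# show which addons are enabled for the profile
	minikube -p addons-850167 addons list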
	I0929 12:26:27.247050  568833 addons.go:69] Setting yakd=true in profile "addons-850167"
	I0929 12:26:27.247061  568833 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-850167"
	I0929 12:26:27.247076  568833 addons.go:238] Setting addon yakd=true in "addons-850167"
	I0929 12:26:27.247080  568833 addons.go:69] Setting storage-provisioner=true in profile "addons-850167"
	I0929 12:26:27.247073  568833 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-850167"
	I0929 12:26:27.247099  568833 addons.go:238] Setting addon storage-provisioner=true in "addons-850167"
	I0929 12:26:27.247102  568833 addons.go:69] Setting gcp-auth=true in profile "addons-850167"
	I0929 12:26:27.247114  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.247131  568833 mustload.go:65] Loading cluster: addons-850167
	I0929 12:26:27.247141  568833 addons.go:69] Setting registry=true in profile "addons-850167"
	I0929 12:26:27.247152  568833 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-850167"
	I0929 12:26:27.247164  568833 addons.go:69] Setting registry-creds=true in profile "addons-850167"
	I0929 12:26:27.247166  568833 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-850167"
	I0929 12:26:27.247174  568833 addons.go:238] Setting addon registry-creds=true in "addons-850167"
	I0929 12:26:27.247194  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.247196  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.247239  568833 addons.go:69] Setting cloud-spanner=true in profile "addons-850167"
	I0929 12:26:27.247266  568833 addons.go:238] Setting addon cloud-spanner=true in "addons-850167"
	I0929 12:26:27.247292  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.247334  568833 config.go:182] Loaded profile config "addons-850167": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:26:27.247574  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247692  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247697  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247711  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247994  568833 addons.go:69] Setting volcano=true in profile "addons-850167"
	I0929 12:26:27.248026  568833 addons.go:238] Setting addon volcano=true in "addons-850167"
	I0929 12:26:27.248056  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.248223  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.248493  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.250103  568833 addons.go:69] Setting ingress-dns=true in profile "addons-850167"
	I0929 12:26:27.250124  568833 addons.go:238] Setting addon ingress-dns=true in "addons-850167"
	I0929 12:26:27.250167  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.250682  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.250990  568833 addons.go:69] Setting inspektor-gadget=true in profile "addons-850167"
	I0929 12:26:27.251013  568833 addons.go:238] Setting addon inspektor-gadget=true in "addons-850167"
	I0929 12:26:27.251047  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.251289  568833 addons.go:69] Setting metrics-server=true in profile "addons-850167"
	I0929 12:26:27.251321  568833 addons.go:238] Setting addon metrics-server=true in "addons-850167"
	I0929 12:26:27.251354  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.251551  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247092  568833 addons.go:69] Setting default-storageclass=true in profile "addons-850167"
	I0929 12:26:27.251617  568833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-850167"
	I0929 12:26:27.251691  568833 addons.go:69] Setting volumesnapshots=true in profile "addons-850167"
	I0929 12:26:27.251708  568833 addons.go:238] Setting addon volumesnapshots=true in "addons-850167"
	I0929 12:26:27.251732  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.251801  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.251940  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.252204  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247156  568833 addons.go:238] Setting addon registry=true in "addons-850167"
	I0929 12:26:27.252360  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.247062  568833 config.go:182] Loaded profile config "addons-850167": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:26:27.247132  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.252739  568833 out.go:179] * Verifying Kubernetes components...
	I0929 12:26:27.252935  568833 addons.go:69] Setting ingress=true in profile "addons-850167"
	I0929 12:26:27.252975  568833 addons.go:238] Setting addon ingress=true in "addons-850167"
	I0929 12:26:27.253017  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.253515  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.255417  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.255594  568833 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-850167"
	I0929 12:26:27.255635  568833 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-850167"
	I0929 12:26:27.255994  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247083  568833 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-850167"
	I0929 12:26:27.257261  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.257753  568833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:26:27.259466  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.247143  568833 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-850167"
	I0929 12:26:27.261583  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.261750  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.266393  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.322005  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.329093  568833 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 12:26:27.329104  568833 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 12:26:27.329840  568833 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 12:26:27.331146  568833 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 12:26:27.331168  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 12:26:27.331235  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
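The Go template in the docker inspect call above extracts the host port Docker mapped to the container's SSH port 22; assuming the container publishes that port (as the later sshutil lines confirm), an equivalent quick check would be:

	# prints the host address:port bound to the container's 22/tcp
	docker port addons-850167 22/tcp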
	I0929 12:26:27.331511  568833 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 12:26:27.331526  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 12:26:27.331580  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.334122  568833 out.go:179]   - Using image docker.io/registry:3.0.0
	W0929 12:26:27.336348  568833 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 12:26:27.337063  568833 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 12:26:27.338789  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 12:26:27.338867  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.337329  568833 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 12:26:27.343203  568833 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 12:26:27.343236  568833 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 12:26:27.343313  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.346507  568833 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:26:27.349551  568833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:26:27.349585  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:26:27.349653  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.375718  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 12:26:27.377002  568833 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 12:26:27.377860  568833 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I0929 12:26:27.377888  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 12:26:27.377910  568833 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 12:26:27.377978  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.378241  568833 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 12:26:27.380686  568833 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 12:26:27.380741  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 12:26:27.380819  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.380692  568833 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:26:27.380938  568833 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:26:27.381009  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.381493  568833 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 12:26:27.383489  568833 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 12:26:27.384747  568833 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 12:26:27.384975  568833 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 12:26:27.384994  568833 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 12:26:27.385083  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.386743  568833 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 12:26:27.386773  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 12:26:27.386830  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.386981  568833 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-850167"
	I0929 12:26:27.387028  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.387843  568833 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 12:26:27.390193  568833 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 12:26:27.390216  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 12:26:27.390290  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.392098  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.400800  568833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
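The sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a log directive and a hosts stanza ahead of the forward plugin, making host.minikube.internal resolve to the gateway address. A minimal sketch of the resulting fragment, assuming the stock Corefile layout, is:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}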
	I0929 12:26:27.404363  568833 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 12:26:27.406100  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 12:26:27.406594  568833 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 12:26:27.406621  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 12:26:27.406687  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.409510  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 12:26:27.411442  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 12:26:27.413371  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 12:26:27.415470  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 12:26:27.416875  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 12:26:27.418292  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 12:26:27.420505  568833 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 12:26:27.422168  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.424478  568833 addons.go:238] Setting addon default-storageclass=true in "addons-850167"
	I0929 12:26:27.424531  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:27.425062  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:27.427043  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 12:26:27.427094  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 12:26:27.427190  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.429931  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.450850  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.451467  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.459332  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.466752  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.476834  568833 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 12:26:27.479364  568833 out.go:179]   - Using image docker.io/busybox:stable
	I0929 12:26:27.482235  568833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 12:26:27.482312  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 12:26:27.482848  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.493930  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.494570  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.496639  568833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:26:27.498042  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.498660  568833 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:26:27.498702  568833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:26:27.498770  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:27.501567  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.504527  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.509710  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	W0929 12:26:27.514175  568833 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 12:26:27.514217  568833 retry.go:31] will retry after 320.874681ms: ssh: handshake failed: EOF
	I0929 12:26:27.521747  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	W0929 12:26:27.523871  568833 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 12:26:27.523947  568833 retry.go:31] will retry after 305.368904ms: ssh: handshake failed: EOF
	I0929 12:26:27.544334  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	W0929 12:26:27.546639  568833 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 12:26:27.546677  568833 retry.go:31] will retry after 328.993ms: ssh: handshake failed: EOF
	I0929 12:26:27.547655  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:27.629806  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 12:26:27.649729  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 12:26:27.653155  568833 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 12:26:27.653206  568833 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 12:26:27.678054  568833 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:27.678088  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 12:26:27.681964  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 12:26:27.682414  568833 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:26:27.682438  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 12:26:27.694800  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:26:27.707662  568833 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 12:26:27.707690  568833 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 12:26:27.707871  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 12:26:27.717693  568833 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 12:26:27.717803  568833 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 12:26:27.727692  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 12:26:27.729424  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 12:26:27.735759  568833 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:26:27.735802  568833 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:26:27.736557  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:27.743715  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 12:26:27.769612  568833 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 12:26:27.769639  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 12:26:27.781740  568833 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 12:26:27.781776  568833 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 12:26:27.813158  568833 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:26:27.813188  568833 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 12:26:27.850682  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 12:26:27.857114  568833 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 12:26:27.857144  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 12:26:27.885763  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:26:27.942060  568833 node_ready.go:35] waiting up to 6m0s for node "addons-850167" to be "Ready" ...
	I0929 12:26:27.945806  568833 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 12:26:27.953769  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 12:26:28.048967  568833 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 12:26:28.049001  568833 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 12:26:28.062358  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 12:26:28.062438  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 12:26:28.090554  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:26:28.111589  568833 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 12:26:28.111643  568833 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 12:26:28.114216  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 12:26:28.114254  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 12:26:28.186786  568833 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 12:26:28.186819  568833 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 12:26:28.194755  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 12:26:28.194782  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 12:26:28.252460  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 12:26:28.252565  568833 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 12:26:28.280569  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 12:26:28.280655  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 12:26:28.342303  568833 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 12:26:28.342329  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 12:26:28.359988  568833 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 12:26:28.360025  568833 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 12:26:28.416400  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 12:26:28.417020  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 12:26:28.417040  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 12:26:28.462359  568833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-850167" context rescaled to 1 replicas
	I0929 12:26:28.468261  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 12:26:28.468367  568833 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 12:26:28.523052  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 12:26:28.523094  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 12:26:28.564295  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 12:26:28.564326  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 12:26:28.611629  568833 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 12:26:28.611668  568833 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 12:26:28.648355  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 12:26:28.978850  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.24225556s)
	W0929 12:26:28.978922  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:28.978936  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.249448344s)
	I0929 12:26:28.978976  568833 addons.go:479] Verifying addon ingress=true in "addons-850167"
	I0929 12:26:28.979005  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.235258942s)
	I0929 12:26:28.978950  568833 retry.go:31] will retry after 170.550873ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
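	The validation failure above (repeated on each retry) means the ig-crd.yaml copied to the node is missing the mandatory top-level fields of every Kubernetes manifest; note that the earlier scp line transferred only 14 bytes for that file. Purely as an illustration, a manifest that passes this particular check has to start with something like:

	# hypothetical header only; a real CustomResourceDefinition also needs spec.group, spec.names, spec.scope and spec.versions
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.com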
	I0929 12:26:28.979068  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.128327431s)
	I0929 12:26:28.979136  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.093343003s)
	I0929 12:26:28.979147  568833 addons.go:479] Verifying addon registry=true in "addons-850167"
	I0929 12:26:28.979156  568833 addons.go:479] Verifying addon metrics-server=true in "addons-850167"
	I0929 12:26:28.979208  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.025328951s)
	I0929 12:26:28.981176  568833 out.go:179] * Verifying ingress addon...
	I0929 12:26:28.981182  568833 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-850167 service yakd-dashboard -n yakd-dashboard
	
	I0929 12:26:28.981176  568833 out.go:179] * Verifying registry addon...
	I0929 12:26:28.983916  568833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 12:26:28.983999  568833 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 12:26:28.990559  568833 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 12:26:28.990587  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 12:26:28.990687  568833 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
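	The 'default-storageclass' warning above is an optimistic-concurrency conflict: the addon reads the local-path StorageClass, clears its default annotation, and writes it back while another actor is updating the same object. As a hedged sketch of the equivalent manual steps (the annotation key is the standard Kubernetes default-class marker, and a patch sidesteps the read-modify-write conflict):

	# demote the rancher local-path class, then promote minikube's standard class
	kubectl --context addons-850167 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl --context addons-850167 patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'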
	I0929 12:26:28.990729  568833 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 12:26:28.990745  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:29.150796  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:29.488626  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:29.489118  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:29.596860  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.180406777s)
	W0929 12:26:29.596931  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 12:26:29.596957  568833 retry.go:31] will retry after 334.391112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
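	The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass is applied in the same kubectl invocation that creates the snapshot CRDs, and the API server has not established the new types yet, which is why the later forced re-apply succeeds. A hedged sketch of serializing the two steps by hand:

	# create the CRDs, wait until the API server reports them Established, then apply objects of the new kinds
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml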
	I0929 12:26:29.597093  568833 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-850167"
	I0929 12:26:29.599282  568833 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 12:26:29.603113  568833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 12:26:29.606416  568833 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 12:26:29.606440  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 12:26:29.835593  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:29.835627  568833 retry.go:31] will retry after 439.842593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:29.931942  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 12:26:29.945506  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
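	The node_ready poll above is simply watching the node's Ready condition; an equivalent one-off check (context and node name taken from this log) would be:

	# prints "True" once the kubelet reports the node Ready
	kubectl --context addons-850167 get node addons-850167 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'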
	I0929 12:26:29.988077  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:29.988281  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:30.106354  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:30.276630  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:30.487661  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:30.487899  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:30.607614  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:30.987938  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:30.988172  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:31.107202  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:31.487898  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:31.488132  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:31.606072  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:31.987783  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:31.987943  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:32.107232  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:32.440446  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.508402635s)
	I0929 12:26:32.440506  568833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.163838916s)
	W0929 12:26:32.440546  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:32.440574  568833 retry.go:31] will retry after 393.696553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:26:32.445305  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:32.487684  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:32.487806  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:32.607132  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:32.835222  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:32.987772  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:32.987992  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:33.107749  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 12:26:33.403766  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:33.403811  568833 retry.go:31] will retry after 595.709137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:33.487924  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:33.488095  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:33.606551  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:33.987455  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:33.987637  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:33.999646  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:34.106338  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:34.487769  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:34.488258  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 12:26:34.560190  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:34.560231  568833 retry.go:31] will retry after 1.348270277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:34.606279  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 12:26:34.945056  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:34.946318  568833 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 12:26:34.946400  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:34.965323  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:34.988025  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:34.988359  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:35.075835  568833 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 12:26:35.096813  568833 addons.go:238] Setting addon gcp-auth=true in "addons-850167"
	I0929 12:26:35.096912  568833 host.go:66] Checking if "addons-850167" exists ...
	I0929 12:26:35.097292  568833 cli_runner.go:164] Run: docker container inspect addons-850167 --format={{.State.Status}}
	I0929 12:26:35.106544  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:35.116115  568833 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 12:26:35.116200  568833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850167
	I0929 12:26:35.134662  568833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/addons-850167/id_rsa Username:docker}
	I0929 12:26:35.230325  568833 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 12:26:35.231834  568833 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 12:26:35.233113  568833 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 12:26:35.233129  568833 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 12:26:35.253400  568833 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 12:26:35.253435  568833 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 12:26:35.273241  568833 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 12:26:35.273267  568833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 12:26:35.293200  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 12:26:35.487560  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:35.487825  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:35.607587  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:35.622427  568833 addons.go:479] Verifying addon gcp-auth=true in "addons-850167"
	I0929 12:26:35.624789  568833 out.go:179] * Verifying gcp-auth addon...
	I0929 12:26:35.627470  568833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 12:26:35.707335  568833 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 12:26:35.707368  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
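The gcp-auth verification step above polls for a pod carrying the label kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace. A quick manual equivalent, assuming the kubectl context is named after the profile (minikube's usual convention), would be:

	# list the gcp-auth webhook pod; add -w to watch it move from Pending to Running
	kubectl --context addons-850167 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth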
	I0929 12:26:35.909536  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:35.987347  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:35.987645  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:36.106590  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:36.135190  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:36.477319  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:36.477356  568833 retry.go:31] will retry after 2.519184404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:36.487353  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:36.487576  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:36.607017  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:36.630660  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:36.945916  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:36.988207  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:36.988460  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:37.106177  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:37.131552  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:37.487098  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:37.487243  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:37.606312  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:37.631321  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:37.987315  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:37.987587  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:38.106619  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:38.130560  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:38.487564  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:38.487632  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:38.607189  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:38.631159  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:38.987328  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:38.987420  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:38.997434  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:39.107556  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:39.131871  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:39.445194  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:39.487362  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:39.487550  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 12:26:39.561083  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:39.561122  568833 retry.go:31] will retry after 3.470093605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:39.607363  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:39.631247  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:39.987653  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:39.987814  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:40.106611  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:40.130671  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:40.487356  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:40.487476  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:40.606671  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:40.630507  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:40.987829  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:40.987865  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:41.106945  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:41.131432  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:41.445540  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:41.487524  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:41.487566  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:41.606672  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:41.630595  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:41.988040  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:41.988253  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:42.107458  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:42.131800  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:42.487586  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:42.488075  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:42.607165  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:42.631209  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:42.987290  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:42.987366  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:43.031374  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:43.107060  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:43.131359  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:43.446469  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:43.488460  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:43.488619  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 12:26:43.588609  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:43.588643  568833 retry.go:31] will retry after 3.995061352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:43.606819  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:43.630794  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:43.987203  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:43.987443  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:44.106189  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:44.131563  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:44.487276  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:44.487484  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:44.606796  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:44.707968  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:44.988301  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:44.988365  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:45.107408  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:45.131657  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:45.487592  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:45.487869  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:45.606787  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:45.631011  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:45.946242  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:45.987793  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:45.988106  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:46.107133  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:46.131175  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:46.487640  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:46.487790  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:46.607280  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:46.631394  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:46.987512  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:46.987590  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:47.106702  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:47.130870  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:47.487371  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:47.487549  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:47.584642  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:26:47.606366  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:47.631370  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:47.988138  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:47.988636  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:48.106673  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:48.130745  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:48.145325  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:48.145367  568833 retry.go:31] will retry after 4.626908235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:26:48.445793  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:48.488051  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:48.488293  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:48.607488  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:48.631386  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:48.987928  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:48.988088  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:49.107150  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:49.131555  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:49.488323  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:49.488373  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:49.607250  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:49.631409  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:49.988555  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:49.988628  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:50.106876  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:50.131002  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:50.487974  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:50.488011  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:50.607332  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:50.631409  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:50.945592  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:50.988057  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:50.988300  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:51.107238  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:51.131468  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:51.487985  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:51.487998  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:51.606925  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:51.631188  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:51.987156  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:51.987306  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:52.106492  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:52.131785  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:52.487759  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:52.487993  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:52.607145  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:52.631234  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:52.773375  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 12:26:52.945660  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:52.987942  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:52.988053  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:53.106811  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:53.131150  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:53.329316  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:53.329344  568833 retry.go:31] will retry after 10.650007998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:26:53.487546  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:53.487608  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:53.606604  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:53.630589  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:53.987805  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:53.987862  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:54.106751  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:54.130650  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:54.487749  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:54.487762  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:54.607030  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:54.631113  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:54.987521  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:54.987521  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:55.106576  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:55.130768  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:55.445825  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:55.488152  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:55.488213  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:55.606983  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:55.631016  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:55.987727  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:55.987794  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:56.106768  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:56.130711  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:56.487763  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:56.487868  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:56.607156  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:56.631354  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:56.987605  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:56.987909  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:57.106877  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:57.131535  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:57.445913  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:57.487263  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:57.487451  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:57.606365  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:57.631227  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:57.987454  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:57.987506  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:58.106763  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:58.130850  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:58.487418  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:58.487464  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:58.606398  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:58.631637  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:58.987639  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:58.987776  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:59.107062  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:59.131128  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:26:59.446008  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:26:59.488402  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:59.488450  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:26:59.606394  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:26:59.631385  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:26:59.988185  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:26:59.988208  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:00.107168  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:00.131204  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:00.488128  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:00.488294  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:00.606368  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:00.631442  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:00.987424  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:00.987472  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:01.106551  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:01.131918  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:27:01.446287  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:27:01.487407  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:01.487704  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:01.606385  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:01.631345  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:01.987471  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:01.987535  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:02.106618  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:02.131411  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:02.486916  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:02.487057  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:02.607497  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:02.631535  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:02.987520  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:02.987622  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:03.106598  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:03.130669  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:03.487871  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:03.487904  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:03.606928  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:03.631441  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:27:03.945834  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:27:03.980064  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:27:03.987346  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:03.987517  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:04.106379  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:04.131367  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:04.487835  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:04.488142  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 12:27:04.545763  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:27:04.545797  568833 retry.go:31] will retry after 21.49981392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:27:04.606868  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:04.630997  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:04.987904  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:04.988132  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:05.106841  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:05.130816  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:05.487431  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:05.487708  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:05.606485  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:05.631292  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:05.987728  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:05.987747  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:06.106683  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:06.130545  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 12:27:06.445633  568833 node_ready.go:57] node "addons-850167" has "Ready":"False" status (will retry)
	I0929 12:27:06.487548  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:06.487696  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:06.606704  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:06.630647  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:06.987722  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:06.987924  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:07.107066  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:07.131241  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:07.488138  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:07.488210  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:07.607125  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:07.631330  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:07.987610  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:07.987779  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:08.106839  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:08.131040  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:08.487755  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:08.487843  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:08.606732  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:08.630529  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:08.945936  568833 node_ready.go:49] node "addons-850167" is "Ready"
	I0929 12:27:08.945979  568833 node_ready.go:38] duration metric: took 41.00386143s for node "addons-850167" to be "Ready" ...
	I0929 12:27:08.946012  568833 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:27:08.946081  568833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:27:08.962050  568833 api_server.go:72] duration metric: took 41.715210071s to wait for apiserver process to appear ...
	I0929 12:27:08.962082  568833 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:27:08.962103  568833 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 12:27:08.966382  568833 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 12:27:08.967520  568833 api_server.go:141] control plane version: v1.34.0
	I0929 12:27:08.967548  568833 api_server.go:131] duration metric: took 5.459349ms to wait for apiserver health ...
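The healthz probe that succeeds at 12:27:08 can be reproduced from the host without shelling into the node. This is a minimal sketch, again assuming the context is named after the profile; kubectl get --raw sends the request to the API server recorded in the kubeconfig:

	# plain probe; prints "ok" when every check passes
	kubectl --context addons-850167 get --raw /healthz
	# per-check breakdown, useful when the plain probe fails
	kubectl --context addons-850167 get --raw '/healthz?verbose'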
	I0929 12:27:08.967560  568833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:27:08.971716  568833 system_pods.go:59] 20 kube-system pods found
	I0929 12:27:08.971771  568833 system_pods.go:61] "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 12:27:08.971786  568833 system_pods.go:61] "coredns-66bc5c9577-cwwm4" [17ab74e7-863a-4c25-aa46-347be746e1b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:27:08.971797  568833 system_pods.go:61] "csi-hostpath-attacher-0" [79e53a45-67c1-440a-b1a0-84091a51e3ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 12:27:08.971805  568833 system_pods.go:61] "csi-hostpath-resizer-0" [1c940ee0-28e7-45a9-a142-0936a6963dd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 12:27:08.971828  568833 system_pods.go:61] "csi-hostpathplugin-f9nhr" [2533602a-c8e6-40ea-a0a6-1683e9476efa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 12:27:08.971840  568833 system_pods.go:61] "etcd-addons-850167" [f964aac3-4b7f-4bd6-ae14-e9b2c169cb37] Running
	I0929 12:27:08.971847  568833 system_pods.go:61] "kindnet-8zmwn" [06c2585a-599e-4366-9428-062c976ecc21] Running
	I0929 12:27:08.971856  568833 system_pods.go:61] "kube-apiserver-addons-850167" [72cb8cd8-83b9-4246-8658-fcea71a25f2c] Running
	I0929 12:27:08.971863  568833 system_pods.go:61] "kube-controller-manager-addons-850167" [6373e05a-1129-42e1-8b95-7f0014ee16cc] Running
	I0929 12:27:08.971875  568833 system_pods.go:61] "kube-ingress-dns-minikube" [79c4d828-8bde-45c2-808a-f9b497b8da04] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 12:27:08.971946  568833 system_pods.go:61] "kube-proxy-m2q9m" [cc96fe8e-866b-48af-a641-6be372ccdd9d] Running
	I0929 12:27:08.971993  568833 system_pods.go:61] "kube-scheduler-addons-850167" [79475e2f-ca30-430f-9d8e-3cb3a824406a] Running
	I0929 12:27:08.972014  568833 system_pods.go:61] "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:27:08.972031  568833 system_pods.go:61] "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 12:27:08.972044  568833 system_pods.go:61] "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 12:27:08.972060  568833 system_pods.go:61] "registry-creds-764b6fb674-ffdgx" [49f0bb84-e008-4a53-a094-863de4788c7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 12:27:08.972071  568833 system_pods.go:61] "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 12:27:08.972080  568833 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8kz4d" [d3f2f572-c77e-4639-981f-b309c9b74f0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:08.972093  568833 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zj822" [1c03eb7f-4bd2-42a8-b9be-f7464b5afea5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:08.972105  568833 system_pods.go:61] "storage-provisioner" [fec49ea3-b727-489d-a94f-a3969a6ae23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:27:08.972118  568833 system_pods.go:74] duration metric: took 4.54888ms to wait for pod list to return data ...
	I0929 12:27:08.972134  568833 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:27:08.976638  568833 default_sa.go:45] found service account: "default"
	I0929 12:27:08.976986  568833 default_sa.go:55] duration metric: took 4.83368ms for default service account to be created ...
	I0929 12:27:08.977028  568833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:27:08.983817  568833 system_pods.go:86] 20 kube-system pods found
	I0929 12:27:08.983851  568833 system_pods.go:89] "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 12:27:08.983858  568833 system_pods.go:89] "coredns-66bc5c9577-cwwm4" [17ab74e7-863a-4c25-aa46-347be746e1b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:27:08.983865  568833 system_pods.go:89] "csi-hostpath-attacher-0" [79e53a45-67c1-440a-b1a0-84091a51e3ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 12:27:08.983871  568833 system_pods.go:89] "csi-hostpath-resizer-0" [1c940ee0-28e7-45a9-a142-0936a6963dd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 12:27:08.983877  568833 system_pods.go:89] "csi-hostpathplugin-f9nhr" [2533602a-c8e6-40ea-a0a6-1683e9476efa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 12:27:08.983894  568833 system_pods.go:89] "etcd-addons-850167" [f964aac3-4b7f-4bd6-ae14-e9b2c169cb37] Running
	I0929 12:27:08.983901  568833 system_pods.go:89] "kindnet-8zmwn" [06c2585a-599e-4366-9428-062c976ecc21] Running
	I0929 12:27:08.983910  568833 system_pods.go:89] "kube-apiserver-addons-850167" [72cb8cd8-83b9-4246-8658-fcea71a25f2c] Running
	I0929 12:27:08.983915  568833 system_pods.go:89] "kube-controller-manager-addons-850167" [6373e05a-1129-42e1-8b95-7f0014ee16cc] Running
	I0929 12:27:08.983929  568833 system_pods.go:89] "kube-ingress-dns-minikube" [79c4d828-8bde-45c2-808a-f9b497b8da04] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 12:27:08.983933  568833 system_pods.go:89] "kube-proxy-m2q9m" [cc96fe8e-866b-48af-a641-6be372ccdd9d] Running
	I0929 12:27:08.983937  568833 system_pods.go:89] "kube-scheduler-addons-850167" [79475e2f-ca30-430f-9d8e-3cb3a824406a] Running
	I0929 12:27:08.983941  568833 system_pods.go:89] "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:27:08.983947  568833 system_pods.go:89] "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 12:27:08.983952  568833 system_pods.go:89] "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 12:27:08.983957  568833 system_pods.go:89] "registry-creds-764b6fb674-ffdgx" [49f0bb84-e008-4a53-a094-863de4788c7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 12:27:08.983962  568833 system_pods.go:89] "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 12:27:08.983967  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8kz4d" [d3f2f572-c77e-4639-981f-b309c9b74f0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:08.983973  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj822" [1c03eb7f-4bd2-42a8-b9be-f7464b5afea5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:08.983978  568833 system_pods.go:89] "storage-provisioner" [fec49ea3-b727-489d-a94f-a3969a6ae23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:27:08.983993  568833 retry.go:31] will retry after 293.980151ms: missing components: kube-dns
	I0929 12:27:08.986456  568833 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 12:27:08.986481  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:08.986620  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:09.107816  568833 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 12:27:09.107845  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:09.130607  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:09.286138  568833 system_pods.go:86] 20 kube-system pods found
	I0929 12:27:09.286191  568833 system_pods.go:89] "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 12:27:09.286205  568833 system_pods.go:89] "coredns-66bc5c9577-cwwm4" [17ab74e7-863a-4c25-aa46-347be746e1b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:27:09.286214  568833 system_pods.go:89] "csi-hostpath-attacher-0" [79e53a45-67c1-440a-b1a0-84091a51e3ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 12:27:09.286222  568833 system_pods.go:89] "csi-hostpath-resizer-0" [1c940ee0-28e7-45a9-a142-0936a6963dd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 12:27:09.286231  568833 system_pods.go:89] "csi-hostpathplugin-f9nhr" [2533602a-c8e6-40ea-a0a6-1683e9476efa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 12:27:09.286237  568833 system_pods.go:89] "etcd-addons-850167" [f964aac3-4b7f-4bd6-ae14-e9b2c169cb37] Running
	I0929 12:27:09.286244  568833 system_pods.go:89] "kindnet-8zmwn" [06c2585a-599e-4366-9428-062c976ecc21] Running
	I0929 12:27:09.286250  568833 system_pods.go:89] "kube-apiserver-addons-850167" [72cb8cd8-83b9-4246-8658-fcea71a25f2c] Running
	I0929 12:27:09.286255  568833 system_pods.go:89] "kube-controller-manager-addons-850167" [6373e05a-1129-42e1-8b95-7f0014ee16cc] Running
	I0929 12:27:09.286265  568833 system_pods.go:89] "kube-ingress-dns-minikube" [79c4d828-8bde-45c2-808a-f9b497b8da04] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 12:27:09.286270  568833 system_pods.go:89] "kube-proxy-m2q9m" [cc96fe8e-866b-48af-a641-6be372ccdd9d] Running
	I0929 12:27:09.286276  568833 system_pods.go:89] "kube-scheduler-addons-850167" [79475e2f-ca30-430f-9d8e-3cb3a824406a] Running
	I0929 12:27:09.286287  568833 system_pods.go:89] "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:27:09.286297  568833 system_pods.go:89] "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 12:27:09.286305  568833 system_pods.go:89] "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 12:27:09.286313  568833 system_pods.go:89] "registry-creds-764b6fb674-ffdgx" [49f0bb84-e008-4a53-a094-863de4788c7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 12:27:09.286320  568833 system_pods.go:89] "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 12:27:09.286332  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8kz4d" [d3f2f572-c77e-4639-981f-b309c9b74f0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:09.286341  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj822" [1c03eb7f-4bd2-42a8-b9be-f7464b5afea5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:09.286348  568833 system_pods.go:89] "storage-provisioner" [fec49ea3-b727-489d-a94f-a3969a6ae23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:27:09.286371  568833 retry.go:31] will retry after 325.676251ms: missing components: kube-dns
	I0929 12:27:09.488644  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:09.488693  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:09.607090  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:09.616958  568833 system_pods.go:86] 20 kube-system pods found
	I0929 12:27:09.617000  568833 system_pods.go:89] "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 12:27:09.617017  568833 system_pods.go:89] "coredns-66bc5c9577-cwwm4" [17ab74e7-863a-4c25-aa46-347be746e1b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:27:09.617026  568833 system_pods.go:89] "csi-hostpath-attacher-0" [79e53a45-67c1-440a-b1a0-84091a51e3ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 12:27:09.617031  568833 system_pods.go:89] "csi-hostpath-resizer-0" [1c940ee0-28e7-45a9-a142-0936a6963dd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 12:27:09.617037  568833 system_pods.go:89] "csi-hostpathplugin-f9nhr" [2533602a-c8e6-40ea-a0a6-1683e9476efa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 12:27:09.617043  568833 system_pods.go:89] "etcd-addons-850167" [f964aac3-4b7f-4bd6-ae14-e9b2c169cb37] Running
	I0929 12:27:09.617048  568833 system_pods.go:89] "kindnet-8zmwn" [06c2585a-599e-4366-9428-062c976ecc21] Running
	I0929 12:27:09.617055  568833 system_pods.go:89] "kube-apiserver-addons-850167" [72cb8cd8-83b9-4246-8658-fcea71a25f2c] Running
	I0929 12:27:09.617059  568833 system_pods.go:89] "kube-controller-manager-addons-850167" [6373e05a-1129-42e1-8b95-7f0014ee16cc] Running
	I0929 12:27:09.617067  568833 system_pods.go:89] "kube-ingress-dns-minikube" [79c4d828-8bde-45c2-808a-f9b497b8da04] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 12:27:09.617071  568833 system_pods.go:89] "kube-proxy-m2q9m" [cc96fe8e-866b-48af-a641-6be372ccdd9d] Running
	I0929 12:27:09.617078  568833 system_pods.go:89] "kube-scheduler-addons-850167" [79475e2f-ca30-430f-9d8e-3cb3a824406a] Running
	I0929 12:27:09.617083  568833 system_pods.go:89] "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:27:09.617089  568833 system_pods.go:89] "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 12:27:09.617095  568833 system_pods.go:89] "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 12:27:09.617103  568833 system_pods.go:89] "registry-creds-764b6fb674-ffdgx" [49f0bb84-e008-4a53-a094-863de4788c7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 12:27:09.617109  568833 system_pods.go:89] "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 12:27:09.617114  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8kz4d" [d3f2f572-c77e-4639-981f-b309c9b74f0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:09.617125  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj822" [1c03eb7f-4bd2-42a8-b9be-f7464b5afea5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:09.617130  568833 system_pods.go:89] "storage-provisioner" [fec49ea3-b727-489d-a94f-a3969a6ae23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:27:09.617146  568833 retry.go:31] will retry after 461.574921ms: missing components: kube-dns
	I0929 12:27:09.631086  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:09.988011  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:09.988283  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:10.083511  568833 system_pods.go:86] 20 kube-system pods found
	I0929 12:27:10.083548  568833 system_pods.go:89] "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 12:27:10.083554  568833 system_pods.go:89] "coredns-66bc5c9577-cwwm4" [17ab74e7-863a-4c25-aa46-347be746e1b7] Running
	I0929 12:27:10.083563  568833 system_pods.go:89] "csi-hostpath-attacher-0" [79e53a45-67c1-440a-b1a0-84091a51e3ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 12:27:10.083570  568833 system_pods.go:89] "csi-hostpath-resizer-0" [1c940ee0-28e7-45a9-a142-0936a6963dd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 12:27:10.083576  568833 system_pods.go:89] "csi-hostpathplugin-f9nhr" [2533602a-c8e6-40ea-a0a6-1683e9476efa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 12:27:10.083580  568833 system_pods.go:89] "etcd-addons-850167" [f964aac3-4b7f-4bd6-ae14-e9b2c169cb37] Running
	I0929 12:27:10.083584  568833 system_pods.go:89] "kindnet-8zmwn" [06c2585a-599e-4366-9428-062c976ecc21] Running
	I0929 12:27:10.083588  568833 system_pods.go:89] "kube-apiserver-addons-850167" [72cb8cd8-83b9-4246-8658-fcea71a25f2c] Running
	I0929 12:27:10.083592  568833 system_pods.go:89] "kube-controller-manager-addons-850167" [6373e05a-1129-42e1-8b95-7f0014ee16cc] Running
	I0929 12:27:10.083598  568833 system_pods.go:89] "kube-ingress-dns-minikube" [79c4d828-8bde-45c2-808a-f9b497b8da04] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 12:27:10.083605  568833 system_pods.go:89] "kube-proxy-m2q9m" [cc96fe8e-866b-48af-a641-6be372ccdd9d] Running
	I0929 12:27:10.083609  568833 system_pods.go:89] "kube-scheduler-addons-850167" [79475e2f-ca30-430f-9d8e-3cb3a824406a] Running
	I0929 12:27:10.083613  568833 system_pods.go:89] "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:27:10.083624  568833 system_pods.go:89] "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 12:27:10.083634  568833 system_pods.go:89] "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 12:27:10.083641  568833 system_pods.go:89] "registry-creds-764b6fb674-ffdgx" [49f0bb84-e008-4a53-a094-863de4788c7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 12:27:10.083646  568833 system_pods.go:89] "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 12:27:10.083654  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8kz4d" [d3f2f572-c77e-4639-981f-b309c9b74f0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:10.083664  568833 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj822" [1c03eb7f-4bd2-42a8-b9be-f7464b5afea5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 12:27:10.083668  568833 system_pods.go:89] "storage-provisioner" [fec49ea3-b727-489d-a94f-a3969a6ae23d] Running
	I0929 12:27:10.083678  568833 system_pods.go:126] duration metric: took 1.106640172s to wait for k8s-apps to be running ...
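	(Editor's note) The retries above ("missing components: kube-dns") resolve once the coredns pod reports Running. A sketch of an equivalent manual check, assuming the kubectl context name from this run (addons-850167) and the standard k8s-app=kube-dns label carried by CoreDNS pods:
	
	# List the DNS pods the readiness check is waiting on; the wait completes
	# once a coredns pod in kube-system reports Running.
	kubectl --context addons-850167 -n kube-system get pods -l k8s-app=kube-dns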
	I0929 12:27:10.083688  568833 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:27:10.083734  568833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:27:10.097516  568833 system_svc.go:56] duration metric: took 13.81819ms WaitForService to wait for kubelet
	I0929 12:27:10.097544  568833 kubeadm.go:578] duration metric: took 42.850709097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:27:10.097565  568833 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:27:10.100599  568833 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:27:10.100627  568833 node_conditions.go:123] node cpu capacity is 8
	I0929 12:27:10.100653  568833 node_conditions.go:105] duration metric: took 3.083611ms to run NodePressure ...
	I0929 12:27:10.100665  568833 start.go:241] waiting for startup goroutines ...
	I0929 12:27:10.106501  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:10.131376  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:10.488046  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:10.488142  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:10.607724  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:10.631198  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:10.988028  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:10.988067  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:11.107195  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:11.131246  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:11.488250  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:11.488261  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:11.606578  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:11.631843  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:11.988051  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:11.988211  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:12.107382  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:12.131256  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:12.488398  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:12.488494  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:12.606963  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:12.630722  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:12.987375  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:12.987431  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:13.106635  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:13.130643  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:13.488576  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:13.488777  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:13.606974  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:13.630909  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:13.987558  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:13.987596  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:14.106584  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:14.131463  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:14.488321  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:14.488511  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:14.607277  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:14.631226  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:14.987829  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:14.988091  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:15.107203  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:15.131207  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:15.488167  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:15.488169  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:15.607796  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:15.630732  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:15.988071  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:15.988154  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:16.107553  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:16.131418  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:16.488034  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:16.488095  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:16.607298  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:16.630985  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:16.987946  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:16.987979  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:17.107542  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:17.131612  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:17.487731  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:17.487869  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:17.607245  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:17.630766  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:17.988382  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:17.988395  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:18.106579  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:18.131574  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:18.488058  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:18.488187  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:18.607183  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:18.631477  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:18.988416  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:18.988573  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:19.107210  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:19.131238  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:19.488473  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:19.488661  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:19.606695  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:19.630351  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:19.988059  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:19.988156  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:20.107729  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:20.131524  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:20.488530  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:20.488649  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:20.607272  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:20.631374  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:20.988154  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:20.988215  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:21.110071  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:21.132830  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:21.489062  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:21.489230  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:21.607381  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:21.631265  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:21.987463  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:21.987500  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:22.106922  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:22.131218  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:22.488102  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:22.488181  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:22.607632  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:22.631668  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:22.988346  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:22.988375  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:23.106529  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:23.131482  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:23.488542  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:23.488560  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:23.607225  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:23.631124  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:23.987706  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:23.987768  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:24.106814  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:24.130930  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:24.487991  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:24.488116  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:24.607402  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:24.631350  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:25.000391  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:25.000484  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:25.106594  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:25.131703  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:25.488464  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:25.488548  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:25.606646  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:25.631299  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:25.987692  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:25.987873  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:26.046804  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:27:26.108451  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:26.131370  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:26.488052  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:26.488137  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:26.606786  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 12:27:26.619061  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:27:26.619102  568833 retry.go:31] will retry after 30.791534083s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
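	(Editor's note) The retry above is caused by /etc/kubernetes/addons/ig-crd.yaml failing kubectl's client-side validation: every Kubernetes manifest needs top-level apiVersion and kind fields, and the error indicates this file lacks both. A minimal sketch of how a manifest missing those fields trips the same class of error, using a hypothetical file name broken-crd.yaml that is not part of this test run; exact wording varies by kubectl version:
	
	# Hypothetical manifest missing the required apiVersion and kind fields
	cat > broken-crd.yaml <<'EOF'
	metadata:
	  name: example-resource
	spec: {}
	EOF
	
	# Applying it trips kubectl's client-side validation with an error of the form
	# "error validating data: [apiVersion not set, kind not set]", matching the log above.
	kubectl apply -f broken-crd.yaml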
	I0929 12:27:26.630966  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:26.988240  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:26.988295  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:27.106852  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:27.131295  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:27.488200  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:27.488241  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:27.607695  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:27.631875  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:27.987573  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:27.987636  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:28.107853  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:28.132082  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:28.488337  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:28.488522  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:28.607041  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:28.631060  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:28.988781  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:28.988860  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:29.106731  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:29.130675  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:29.487156  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:29.487193  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:29.606247  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:29.631530  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:29.988428  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:29.988468  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:30.106558  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:30.131510  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:30.488960  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:30.489057  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:30.606991  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:30.630928  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:30.987743  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:30.987836  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:31.107067  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:31.131043  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:31.487748  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:31.487787  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:31.607249  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:31.631203  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:31.989411  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:31.989774  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:32.107352  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:32.131262  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:32.488027  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:32.488169  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:32.607597  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:32.631646  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:32.988371  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:32.988414  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:33.107486  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:33.131608  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:33.488786  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:33.488825  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:33.607698  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:33.631480  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:33.988084  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:33.988130  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:34.107708  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:34.132169  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:34.487792  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:34.487989  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:34.606796  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:34.701061  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:34.988136  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:34.988132  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:35.107029  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:35.130960  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:35.487216  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:35.487254  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:35.610294  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:35.631376  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:35.988157  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:35.988237  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:36.107076  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:36.131061  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:36.487896  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:36.487918  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:36.607261  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:36.631562  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:36.988411  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:36.989157  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:37.107374  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:37.131760  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:37.487859  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:37.487866  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:37.607913  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:37.631071  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:37.987557  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:37.987695  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:38.108080  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:38.131236  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:38.487925  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:38.487957  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:38.607385  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:38.631310  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:38.987871  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:38.987919  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:39.107342  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:39.131710  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:39.487350  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:39.487400  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:39.606993  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:39.631094  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:39.988568  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:39.988666  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:40.107387  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:40.131305  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:40.487956  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:40.488065  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:40.607942  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:40.708141  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:40.987629  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:40.987762  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:41.106676  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:41.130637  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:41.490020  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:41.490058  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:41.608953  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:41.630968  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:41.993310  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:41.993936  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:42.107706  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:42.131351  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:42.488153  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:42.488182  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:42.607504  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:42.631651  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:42.988776  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:42.988800  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:43.107311  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:43.131605  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:43.488581  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:43.488850  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:43.607157  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:43.632299  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:43.987638  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:43.987702  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:44.107142  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:44.131035  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:44.488347  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:44.488389  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:44.606896  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:44.630674  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:44.987933  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:44.987985  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:45.107250  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:45.131433  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:45.488437  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:45.488516  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:45.606776  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:45.630683  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:45.988167  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:45.988210  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:46.106401  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:46.131580  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:46.488039  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:46.488086  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:46.607363  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:46.631565  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:46.988047  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:46.988127  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:47.107997  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:47.130849  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:47.487814  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:47.487855  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:47.607628  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:47.631279  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:47.988238  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:47.988317  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:48.107778  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:48.131506  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:48.488171  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:48.488287  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:48.607400  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:48.631327  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:48.987995  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:48.988062  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:49.111288  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:49.208482  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:49.488144  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:49.488275  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:49.606533  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:49.631514  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:49.988629  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:49.988673  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:50.107050  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:50.130864  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:50.487654  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:50.487695  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:50.606760  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:50.630603  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:50.988667  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:50.988668  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:51.107354  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:51.131542  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:51.488438  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:51.488442  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:51.606781  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:51.630559  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:51.988186  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:51.988239  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:52.106682  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:52.130609  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:52.487408  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:52.487408  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:52.607049  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:52.630849  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:52.988516  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:52.988512  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:53.107117  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:53.131605  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:53.489029  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:53.489063  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:53.607512  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:53.631559  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:53.987993  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:53.988131  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:54.107357  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:54.131950  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:54.488104  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 12:27:54.488328  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:54.607221  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:54.708025  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:54.992641  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:54.993656  568833 kapi.go:107] duration metric: took 1m26.009738293s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 12:27:55.107961  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:55.131279  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:55.489800  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:55.608641  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:55.631789  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:55.988772  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:56.107501  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:56.131777  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:56.488407  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:56.606788  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:56.630996  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:56.988536  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:57.106738  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:57.130725  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:57.411159  568833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 12:27:57.487664  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:57.606999  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:57.631354  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:57.989078  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 12:27:58.045687  568833 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:27:58.045820  568833 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
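The retry warning above comes from kubectl's client-side validation: /etc/kubernetes/addons/ig-crd.yaml is missing its top-level apiVersion and kind fields, so the inspektor-gadget CRD is rejected while the rest of the addon objects apply unchanged. A minimal way to confirm what the addon manager tried to apply, assuming the addons-850167 node is still up (the ssh invocation and line count here are illustrative, not part of the test run):

	# Inspect the head of the manifest inside the addons-850167 node; a valid CRD
	# manifest would start with, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	out/minikube-linux-amd64 -p addons-850167 ssh "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"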
	I0929 12:27:58.107042  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:58.131217  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:58.487704  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:58.607417  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:58.631472  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:58.987820  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:59.107319  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:59.131572  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:59.488528  568833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 12:27:59.606774  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:27:59.630721  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:27:59.988928  568833 kapi.go:107] duration metric: took 1m31.004924s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 12:28:00.107406  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:00.131269  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:00.607405  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:00.631316  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:01.188213  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:01.188405  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:01.606965  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:01.630805  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:02.108926  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:02.131285  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:02.607728  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:02.630957  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:03.106526  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:03.131570  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:03.607095  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:03.631197  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:04.107547  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:04.131700  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:04.607792  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:04.708049  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:05.108186  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:05.133064  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:05.607159  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:05.630988  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:06.107570  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:06.131592  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:06.607565  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:06.631414  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:07.106976  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:07.131091  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:07.607112  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:07.631032  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:08.107826  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:08.131246  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:08.607627  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:08.631731  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 12:28:09.106994  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:09.206861  568833 kapi.go:107] duration metric: took 1m33.579390477s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 12:28:09.208626  568833 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-850167 cluster.
	I0929 12:28:09.210155  568833 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 12:28:09.211802  568833 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
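As an aside on the gcp-auth notes above: the gcp-auth-skip-secret label has to be present when the pod is created, since that is when the credentials are mounted (hence the advice to recreate existing pods). A hedged sketch of creating such a pod against this cluster (pod name, image, and label value are placeholders; per the message, only the label key matters):

	# Create a pod labelled so that gcp-auth does not mount credentials into it.
	kubectl --context addons-850167 run no-gcp-creds --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600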
	I0929 12:28:09.608037  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:10.107942  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:10.607677  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:11.108336  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:11.607027  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:12.108096  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:12.606529  568833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 12:28:13.107756  568833 kapi.go:107] duration metric: took 1m43.504642885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 12:28:13.109643  568833 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 12:28:13.111066  568833 addons.go:514] duration metric: took 1m45.864198091s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 12:28:13.111125  568833 start.go:246] waiting for cluster config update ...
	I0929 12:28:13.111153  568833 start.go:255] writing updated cluster config ...
	I0929 12:28:13.111502  568833 ssh_runner.go:195] Run: rm -f paused
	I0929 12:28:13.116086  568833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:28:13.119772  568833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cwwm4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.125577  568833 pod_ready.go:94] pod "coredns-66bc5c9577-cwwm4" is "Ready"
	I0929 12:28:13.125606  568833 pod_ready.go:86] duration metric: took 5.811021ms for pod "coredns-66bc5c9577-cwwm4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.128122  568833 pod_ready.go:83] waiting for pod "etcd-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.132134  568833 pod_ready.go:94] pod "etcd-addons-850167" is "Ready"
	I0929 12:28:13.132158  568833 pod_ready.go:86] duration metric: took 4.011312ms for pod "etcd-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.134303  568833 pod_ready.go:83] waiting for pod "kube-apiserver-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.138566  568833 pod_ready.go:94] pod "kube-apiserver-addons-850167" is "Ready"
	I0929 12:28:13.138591  568833 pod_ready.go:86] duration metric: took 4.268985ms for pod "kube-apiserver-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.141124  568833 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.520590  568833 pod_ready.go:94] pod "kube-controller-manager-addons-850167" is "Ready"
	I0929 12:28:13.520618  568833 pod_ready.go:86] duration metric: took 379.467799ms for pod "kube-controller-manager-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:13.721044  568833 pod_ready.go:83] waiting for pod "kube-proxy-m2q9m" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:14.120559  568833 pod_ready.go:94] pod "kube-proxy-m2q9m" is "Ready"
	I0929 12:28:14.120594  568833 pod_ready.go:86] duration metric: took 399.519747ms for pod "kube-proxy-m2q9m" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:14.320142  568833 pod_ready.go:83] waiting for pod "kube-scheduler-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:14.720865  568833 pod_ready.go:94] pod "kube-scheduler-addons-850167" is "Ready"
	I0929 12:28:14.720921  568833 pod_ready.go:86] duration metric: took 400.743571ms for pod "kube-scheduler-addons-850167" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:28:14.720933  568833 pod_ready.go:40] duration metric: took 1.604809309s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:28:14.769053  568833 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:28:14.772741  568833 out.go:179] * Done! kubectl is now configured to use "addons-850167" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.866949423Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.866989940Z" level=info msg="Removed pod sandbox: 3e42f34abcd18d6879bafa1b53ecd411a8d553edd878fb22a7264a93942c6956" id=896fc26d-b607-4e2b-b402-0823a8ab47bd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.867437289Z" level=info msg="Stopping pod sandbox: 5be7bc84e7052eb3442535c6b7962b5571df7bb6281598613c8f878abedbd2c7" id=996d8a5b-e5b6-4dfc-a1be-6c7d2bbd3527 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.867474925Z" level=info msg="Stopped pod sandbox (already stopped): 5be7bc84e7052eb3442535c6b7962b5571df7bb6281598613c8f878abedbd2c7" id=996d8a5b-e5b6-4dfc-a1be-6c7d2bbd3527 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.867817385Z" level=info msg="Removing pod sandbox: 5be7bc84e7052eb3442535c6b7962b5571df7bb6281598613c8f878abedbd2c7" id=633d1b8e-76d9-4aef-be1b-9b54dd52172c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.874125930Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.874160118Z" level=info msg="Removed pod sandbox: 5be7bc84e7052eb3442535c6b7962b5571df7bb6281598613c8f878abedbd2c7" id=633d1b8e-76d9-4aef-be1b-9b54dd52172c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.874600750Z" level=info msg="Stopping pod sandbox: cacaf0f98da89d97cbeb0b97d54e86b1095678b1346d5b56dd4f7bf1d4c4685b" id=b605b8ed-77a4-4fb8-9e0d-8f1cadbe5cbb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.874641700Z" level=info msg="Stopped pod sandbox (already stopped): cacaf0f98da89d97cbeb0b97d54e86b1095678b1346d5b56dd4f7bf1d4c4685b" id=b605b8ed-77a4-4fb8-9e0d-8f1cadbe5cbb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.874942249Z" level=info msg="Removing pod sandbox: cacaf0f98da89d97cbeb0b97d54e86b1095678b1346d5b56dd4f7bf1d4c4685b" id=7e2c631b-78a7-4f3b-84b3-8eed50eed8a6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.881904435Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 12:30:21 addons-850167 crio[931]: time="2025-09-29 12:30:21.881948269Z" level=info msg="Removed pod sandbox: cacaf0f98da89d97cbeb0b97d54e86b1095678b1346d5b56dd4f7bf1d4c4685b" id=7e2c631b-78a7-4f3b-84b3-8eed50eed8a6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.218439466Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9bkl5/POD" id=5880be1e-059a-4b4d-a488-02424dc671df name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.218510473Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.240230202Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9bkl5 Namespace:default ID:dc16e300780ac5ab50448145aac3ae2dc968631410d1bcd0a090e7023980efcc UID:84102993-de53-415f-874a-c2112e74607e NetNS:/var/run/netns/6c506900-149e-4972-8d4b-e8600cf18aaf Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.240288427Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9bkl5 to CNI network \"kindnet\" (type=ptp)"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.252067068Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9bkl5 Namespace:default ID:dc16e300780ac5ab50448145aac3ae2dc968631410d1bcd0a090e7023980efcc UID:84102993-de53-415f-874a-c2112e74607e NetNS:/var/run/netns/6c506900-149e-4972-8d4b-e8600cf18aaf Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.252213863Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9bkl5 for CNI network kindnet (type=ptp)"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.253271144Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.254036119Z" level=info msg="Ran pod sandbox dc16e300780ac5ab50448145aac3ae2dc968631410d1bcd0a090e7023980efcc with infra container: default/hello-world-app-5d498dc89-9bkl5/POD" id=5880be1e-059a-4b4d-a488-02424dc671df name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.255435840Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=972dee3b-31d0-4582-ad0a-2704e5510521 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.255680651Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=972dee3b-31d0-4582-ad0a-2704e5510521 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.256335960Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=90f37afe-df31-4504-b39b-98b67c4c5efc name=/runtime.v1.ImageService/PullImage
	Sep 29 12:31:17 addons-850167 crio[931]: time="2025-09-29 12:31:17.268719014Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 12:31:18 addons-850167 crio[931]: time="2025-09-29 12:31:18.125505077Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fbeea935a6f13       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   c2b75cf1b0623       nginx
	4a42df2b043a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   2debe5ce8e7e1       busybox
	4945b1b5a536b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago       Running             gadget                    0                   704cf05f16ef4       gadget-pz2rt
	7591758c9655d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   ba0ff8e38f881       ingress-nginx-controller-9cc49f96f-zjjcn
	6e5a66f052132       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              patch                     0                   d6449062530b0       ingress-nginx-admission-patch-mfcz4
	b42052ec1763e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   4586655f1f883       kube-ingress-dns-minikube
	8fb5434a2b6c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   829782c4b22f6       ingress-nginx-admission-create-pcvrj
	8b32193ee9a78       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   e3b965d590558       coredns-66bc5c9577-cwwm4
	22db61573f9f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   6d4aebaed4a14       storage-provisioner
	0e5129bb9d8c1       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago       Running             kube-proxy                0                   c01391cee6355       kube-proxy-m2q9m
	7f28625b33735       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago       Running             kindnet-cni               0                   fbe8549d3fce4       kindnet-8zmwn
	583a32b411aa1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   e1b255e4ba96c       kube-scheduler-addons-850167
	b104f264c2a03       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   ac97d3af203c8       kube-controller-manager-addons-850167
	a3f58b1ddc86e       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   41d0b55667226       kube-apiserver-addons-850167
	a8d4dd131dff9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   b14f0ac96919c       etcd-addons-850167
	
	
	==> coredns [8b32193ee9a784019c157dfa0da0bc240097caea6b3f3b356a0b592c1ff2d853] <==
	[INFO] 10.244.0.13:51534 - 18952 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000159369s
	[INFO] 10.244.0.13:57503 - 34131 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090612s
	[INFO] 10.244.0.13:57503 - 33827 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00012846s
	[INFO] 10.244.0.13:45941 - 22416 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000073649s
	[INFO] 10.244.0.13:45941 - 22176 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000111928s
	[INFO] 10.244.0.13:56640 - 8846 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011468s
	[INFO] 10.244.0.13:56640 - 8634 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156421s
	[INFO] 10.244.0.22:42951 - 40879 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178509s
	[INFO] 10.244.0.22:46915 - 46374 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259851s
	[INFO] 10.244.0.22:34992 - 18878 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149455s
	[INFO] 10.244.0.22:34847 - 50510 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000188304s
	[INFO] 10.244.0.22:57121 - 40305 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108569s
	[INFO] 10.244.0.22:43030 - 27107 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156508s
	[INFO] 10.244.0.22:45356 - 1465 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003553836s
	[INFO] 10.244.0.22:33122 - 7798 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003595105s
	[INFO] 10.244.0.22:56430 - 38176 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00469978s
	[INFO] 10.244.0.22:47638 - 53419 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004901379s
	[INFO] 10.244.0.22:52082 - 59950 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004412009s
	[INFO] 10.244.0.22:55766 - 11520 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004985794s
	[INFO] 10.244.0.22:48099 - 63823 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004709061s
	[INFO] 10.244.0.22:39637 - 38689 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00478628s
	[INFO] 10.244.0.22:60507 - 37363 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00225584s
	[INFO] 10.244.0.22:37635 - 19217 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002967713s
	[INFO] 10.244.0.26:49776 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221467s
	[INFO] 10.244.0.26:57428 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013008s
	
	
	==> describe nodes <==
	Name:               addons-850167
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-850167
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=addons-850167
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_26_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-850167
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:26:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-850167
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:31:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:29:25 +0000   Mon, 29 Sep 2025 12:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:29:25 +0000   Mon, 29 Sep 2025 12:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:29:25 +0000   Mon, 29 Sep 2025 12:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:29:25 +0000   Mon, 29 Sep 2025 12:27:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-850167
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d9b5e28c4bf479d9b92fb2f3d625d24
	  System UUID:                8ad2e62c-c384-44a0-9df0-49234ebc1e32
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-9bkl5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-pz2rt                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-zjjcn    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-cwwm4                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m51s
	  kube-system                 etcd-addons-850167                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m57s
	  kube-system                 kindnet-8zmwn                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-850167                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-850167       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-m2q9m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-850167                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m50s  kube-proxy       
	  Normal  Starting                 4m57s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s  kubelet          Node addons-850167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s  kubelet          Node addons-850167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s  kubelet          Node addons-850167 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m52s  node-controller  Node addons-850167 event: Registered Node addons-850167 in Controller
	  Normal  NodeReady                4m10s  kubelet          Node addons-850167 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 1d 17 83 9b cd 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 2d e6 8e 79 5a 08 06
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [a8d4dd131dff99ea276e667659efe15a0280040c4b1353d406d1c9e8f960dfbb] <==
	{"level":"warn","ts":"2025-09-29T12:26:18.658515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.669087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.677824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.684700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.691445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.698967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.705633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.712259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.720568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.727202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.734779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.741164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.748726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.755031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.766775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.774222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.781588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:18.834314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:30.177877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:56.154086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:56.160843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:56.279009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:26:56.291102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:28:47.631366Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.640306ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040294607248815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-nxj9b\" mod_revision:1477 > success:<request_delete_range:<key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-nxj9b\" > > failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-nxj9b\" > >>","response":"size:5150"}
	{"level":"info","ts":"2025-09-29T12:28:47.631472Z","caller":"traceutil/trace.go:172","msg":"trace[60665208] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1478; }","duration":"203.192306ms","start":"2025-09-29T12:28:47.428265Z","end":"2025-09-29T12:28:47.631457Z","steps":["trace[60665208] 'process raft request'  (duration: 67.872866ms)","trace[60665208] 'compare'  (duration: 134.566458ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:31:18 up  2:13,  0 users,  load average: 0.38, 1.76, 2.73
	Linux addons-850167 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7f28625b33735a5ef1816ae8860e1b7fe82f923dd9ff5ef23d7a3c7e9aea5834] <==
	I0929 12:29:18.193664       1 main.go:301] handling current node
	I0929 12:29:28.194309       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:29:28.194357       1 main.go:301] handling current node
	I0929 12:29:38.193487       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:29:38.193584       1 main.go:301] handling current node
	I0929 12:29:48.193095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:29:48.193171       1 main.go:301] handling current node
	I0929 12:29:58.193973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:29:58.194010       1 main.go:301] handling current node
	I0929 12:30:08.194265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:08.194306       1 main.go:301] handling current node
	I0929 12:30:18.201981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:18.202030       1 main.go:301] handling current node
	I0929 12:30:28.201964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:28.202004       1 main.go:301] handling current node
	I0929 12:30:38.193524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:38.193568       1 main.go:301] handling current node
	I0929 12:30:48.200968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:48.201011       1 main.go:301] handling current node
	I0929 12:30:58.202016       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:58.202168       1 main.go:301] handling current node
	I0929 12:31:08.199003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:08.199053       1 main.go:301] handling current node
	I0929 12:31:18.192992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:18.193055       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a3f58b1ddc86e61bc7dbeeef358b5ef739e5e30147fdad42d1c0ac59a07f0c47] <==
	E0929 12:28:24.507261       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51664: use of closed network connection
	E0929 12:28:24.696915       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51692: use of closed network connection
	I0929 12:28:33.850024       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.40.197"}
	I0929 12:28:34.927819       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:28:49.951474       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 12:28:50.127190       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.101.248"}
	I0929 12:28:55.051842       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 12:29:03.189625       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 12:29:21.216197       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0929 12:29:23.037650       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 12:29:23.037699       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 12:29:23.057578       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 12:29:23.057625       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 12:29:23.071845       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 12:29:23.072007       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 12:29:23.115070       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 12:29:23.115223       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 12:29:24.058998       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 12:29:24.116005       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 12:29:24.128501       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 12:29:37.222946       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:37.830765       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 12:30:17.560021       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:04.375769       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:16.988031       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.6.255"}
	
	
	==> kube-controller-manager [b104f264c2a038700371a5329fbdf9a30a12765732e19bdd9f9f105166e4c1b4] <==
	E0929 12:29:31.551251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:31.597334       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:31.598460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:34.693878       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:34.694964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:42.257615       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:42.258727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:42.705090       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:42.706326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:43.071216       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:43.072345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:29:59.335283       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:29:59.336344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:30:00.812049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:30:00.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:30:05.371494       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:30:05.372575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:30:31.642094       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:30:31.643208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:30:32.339348       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:30:32.340395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:30:43.476580       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:30:43.477865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 12:31:08.205150       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 12:31:08.206378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0e5129bb9d8c1ec7ffdc91c7c5cc2720b3d1f11c8d66b6679d1e25cb5ba59fb6] <==
	I0929 12:26:27.943524       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:26:28.135180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:26:28.236643       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:26:28.236705       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:26:28.241029       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:26:28.370644       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:26:28.370783       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:26:28.387116       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:26:28.387537       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:26:28.387555       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:26:28.393040       1 config.go:200] "Starting service config controller"
	I0929 12:26:28.393139       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:26:28.393760       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:26:28.393843       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:26:28.394825       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:26:28.394843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:26:28.395681       1 config.go:309] "Starting node config controller"
	I0929 12:26:28.395692       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:26:28.395701       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:26:28.499644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:26:28.499681       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:26:28.499656       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [583a32b411aa1c3bd279905a276f5bd71a6491cca15ef8cf61c4d94807b3dd97] <==
	E0929 12:26:19.255663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:26:19.255679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:26:19.255729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:26:19.255750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:26:19.255798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:26:19.255807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:26:19.255810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:26:19.255838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:26:19.255936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:26:19.255940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:26:19.255967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:26:19.255983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:26:19.256069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:26:19.256066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:26:19.256067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:26:20.127920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:26:20.127929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:26:20.335557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:26:20.386122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:26:20.430457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:26:20.441956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:26:20.477803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:26:20.506099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 12:26:20.579541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I0929 12:26:22.353347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:29:36 addons-850167 kubelet[1548]: E0929 12:29:36.288493    1548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853\": container with ID starting with 1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853 not found: ID does not exist" containerID="1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853"
	Sep 29 12:29:36 addons-850167 kubelet[1548]: I0929 12:29:36.288545    1548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853"} err="failed to get container status \"1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853\": rpc error: code = NotFound desc = could not find container \"1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853\": container with ID starting with 1c6359fee4c7fe2dc179bade9791b524e7a089f1bc3532c3c757d1ab402fe853 not found: ID does not exist"
	Sep 29 12:29:37 addons-850167 kubelet[1548]: I0929 12:29:37.550146    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa40d030-e55a-40f9-9a62-d5aebdc24908" path="/var/lib/kubelet/pods/fa40d030-e55a-40f9-9a62-d5aebdc24908/volumes"
	Sep 29 12:29:41 addons-850167 kubelet[1548]: E0929 12:29:41.591286    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148981591022222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:29:41 addons-850167 kubelet[1548]: E0929 12:29:41.591340    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148981591022222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:29:51 addons-850167 kubelet[1548]: E0929 12:29:51.593758    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148991593524807  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:29:51 addons-850167 kubelet[1548]: E0929 12:29:51.593797    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148991593524807  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:01 addons-850167 kubelet[1548]: E0929 12:30:01.597009    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149001596645668  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:01 addons-850167 kubelet[1548]: E0929 12:30:01.597049    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149001596645668  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:11 addons-850167 kubelet[1548]: E0929 12:30:11.600114    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149011599775950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:11 addons-850167 kubelet[1548]: E0929 12:30:11.600151    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149011599775950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:21 addons-850167 kubelet[1548]: E0929 12:30:21.603548    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149021603180799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:21 addons-850167 kubelet[1548]: E0929 12:30:21.603594    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149021603180799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:26 addons-850167 kubelet[1548]: I0929 12:30:26.548512    1548 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 12:30:31 addons-850167 kubelet[1548]: E0929 12:30:31.606741    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149031606427720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:31 addons-850167 kubelet[1548]: E0929 12:30:31.606776    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149031606427720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:41 addons-850167 kubelet[1548]: E0929 12:30:41.610506    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149041610139108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:41 addons-850167 kubelet[1548]: E0929 12:30:41.610548    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149041610139108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:51 addons-850167 kubelet[1548]: E0929 12:30:51.613691    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149051613342953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:30:51 addons-850167 kubelet[1548]: E0929 12:30:51.613737    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149051613342953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:31:01 addons-850167 kubelet[1548]: E0929 12:31:01.616729    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149061616447150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:31:01 addons-850167 kubelet[1548]: E0929 12:31:01.616770    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149061616447150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:31:11 addons-850167 kubelet[1548]: E0929 12:31:11.620385    1548 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149071619964740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:31:11 addons-850167 kubelet[1548]: E0929 12:31:11.620426    1548 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149071619964740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:609847}  inodes_used:{value:230}}"
	Sep 29 12:31:17 addons-850167 kubelet[1548]: I0929 12:31:17.009272    1548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsxq\" (UniqueName: \"kubernetes.io/projected/84102993-de53-415f-874a-c2112e74607e-kube-api-access-bpsxq\") pod \"hello-world-app-5d498dc89-9bkl5\" (UID: \"84102993-de53-415f-874a-c2112e74607e\") " pod="default/hello-world-app-5d498dc89-9bkl5"
	
	
	==> storage-provisioner [22db61573f9f201f139547dbd6d3034d45ef81a8e72e90888549abeeddde46e8] <==
	W0929 12:30:54.495483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:30:56.499340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:30:56.505526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:30:58.509281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:30:58.513647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:00.518347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:00.524488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:02.528785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:02.533336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:04.537179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:04.542926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:06.546358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:06.550868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:08.553792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:08.559166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:10.562544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:10.566651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:12.570110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:12.575649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:14.579315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:14.583616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:16.587023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:16.592557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:18.595975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:18.600832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-850167 -n addons-850167
helpers_test.go:269: (dbg) Run:  kubectl --context addons-850167 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-9bkl5 ingress-nginx-admission-create-pcvrj ingress-nginx-admission-patch-mfcz4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-850167 describe pod hello-world-app-5d498dc89-9bkl5 ingress-nginx-admission-create-pcvrj ingress-nginx-admission-patch-mfcz4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-850167 describe pod hello-world-app-5d498dc89-9bkl5 ingress-nginx-admission-create-pcvrj ingress-nginx-admission-patch-mfcz4: exit status 1 (77.341071ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-9bkl5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-850167/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:16 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpsxq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bpsxq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-9bkl5 to addons-850167
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pcvrj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mfcz4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-850167 describe pod hello-world-app-5d498dc89-9bkl5 ingress-nginx-admission-create-pcvrj ingress-nginx-admission-patch-mfcz4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable ingress-dns --alsologtostderr -v=1: (1.028623153s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable ingress --alsologtostderr -v=1: (7.73103588s)
--- FAIL: TestAddons/parallel/Ingress (158.63s)
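What failed above: the nginx test pod itself became Ready, but the in-node curl returned exit status 28, which matches curl's operation-timed-out error, so the request never completed through the ingress-nginx controller during the SSH session. A minimal manual re-check, assuming the addons-850167 cluster is still up and the ingress addon has been re-enabled (the test disables it at the end), might look like the following; the 10-second timeout is illustrative and not part of the test:

	out/minikube-linux-amd64 -p addons-850167 ssh "curl -sS -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-850167 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-850167 get ingress -A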

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-253578 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-253578 expose deployment hello-node-connect --type=NodePort --port=8080
2025/09/29 12:34:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6hlmr" [7777ae1a-135e-42b9-a22c-79f2de55f788] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 12:44:28.970339141 +0000 UTC m=+1139.270375609
functional_test.go:1645: (dbg) Run:  kubectl --context functional-253578 describe po hello-node-connect-7d85dfc575-6hlmr -n default
functional_test.go:1645: (dbg) kubectl --context functional-253578 describe po hello-node-connect-7d85dfc575-6hlmr -n default:
Name:             hello-node-connect-7d85dfc575-6hlmr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-253578/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fhrx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7fhrx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6hlmr to functional-253578
Normal   Pulling    5m18s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     5m18s (x5 over 9m27s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     5m18s (x5 over 9m27s)   kubelet            Error: ErrImagePull
Warning  Failed     4m14s (x16 over 9m26s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m11s (x21 over 9m26s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-253578 logs hello-node-connect-7d85dfc575-6hlmr -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-253578 logs hello-node-connect-7d85dfc575-6hlmr -n default: exit status 1 (83.360473ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6hlmr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-253578 logs hello-node-connect-7d85dfc575-6hlmr -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-253578 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6hlmr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-253578/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fhrx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7fhrx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6hlmr to functional-253578
Normal   Pulling    5m18s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     5m18s (x5 over 9m27s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     5m18s (x5 over 9m27s)   kubelet            Error: ErrImagePull
Warning  Failed     4m14s (x16 over 9m26s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m11s (x21 over 9m26s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-253578 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-253578 logs -l app=hello-node-connect: exit status 1 (73.400963ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6hlmr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-253578 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-253578 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.222.117
IPs:                      10.99.222.117
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31896/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
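The empty Endpoints field above is consistent with the pod never becoming Ready: a NodePort service only forwards to ready endpoints, so the connectivity check has nothing to reach. A minimal manual verification sketch, reusing the node IP and NodePort reported above (these commands are illustrative and were not run as part of this test):

    # Confirm the service has no ready endpoints behind it.
    kubectl --context functional-253578 get endpoints hello-node-connect -n default

    # The NodePort would only answer once at least one endpoint exists.
    curl -s http://192.168.49.2:31896/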
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-253578
helpers_test.go:243: (dbg) docker inspect functional-253578:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3",
	        "Created": "2025-09-29T12:32:29.072691477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 593555,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:32:29.11135704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/hosts",
	        "LogPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3-json.log",
	        "Name": "/functional-253578",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-253578:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-253578",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3",
	                "LowerDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253578",
	                "Source": "/var/lib/docker/volumes/functional-253578/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253578",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253578",
	                "name.minikube.sigs.k8s.io": "functional-253578",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b74555dc99ea41e1fdf56c4a7f3c2858156d841aaa590bb51ee17e60a94dd1d2",
	            "SandboxKey": "/var/run/docker/netns/b74555dc99ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253578": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:56:ea:80:c1:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ef9fc70993bc556b3b99be84e9e150092592395f1751c65f8fa1ccc28c5096d",
	                    "EndpointID": "ee62973932a58a0fbb41ebddf22d29fd2522ae60f4808a7381f6aa5a90fad6bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253578",
	                        "c6737fc56ae0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-253578 -n functional-253578
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 logs -n 25: (1.631620728s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-253578 ssh sudo umount -f /mount-9p                                                                                    │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdspecific-port4019347123/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh -- ls -la /mount-9p                                                                                         │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh sudo umount -f /mount-9p                                                                                    │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount1                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount3 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount1 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount2 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount1                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh findmnt -T /mount2                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh findmnt -T /mount3                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ mount          │ -p functional-253578 --kill=true                                                                                                  │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format short --alsologtostderr                                                                       │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format yaml --alsologtostderr                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ ssh            │ functional-253578 ssh pgrep buildkitd                                                                                             │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │                     │
	│ image          │ functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr                            │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls                                                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format json --alsologtostderr                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format table --alsologtostderr                                                                       │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ service        │ functional-253578 service list                                                                                                    │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:44 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:34:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:34:12.460221  605890 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:34:12.460584  605890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.460600  605890 out.go:374] Setting ErrFile to fd 2...
	I0929 12:34:12.460607  605890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.462391  605890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:34:12.463952  605890 out.go:368] Setting JSON to false
	I0929 12:34:12.465334  605890 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8197,"bootTime":1759141055,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:34:12.465493  605890 start.go:140] virtualization: kvm guest
	I0929 12:34:12.467687  605890 out.go:179] * [functional-253578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:34:12.469770  605890 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:34:12.469794  605890 notify.go:220] Checking for updates...
	I0929 12:34:12.472720  605890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:34:12.474283  605890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:34:12.476335  605890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:34:12.478097  605890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:34:12.482503  605890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:34:12.484445  605890 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:34:12.485021  605890 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:34:12.510637  605890 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:34:12.510792  605890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:12.577265  605890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:34:12.563704278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:12.577417  605890 docker.go:318] overlay module found
	I0929 12:34:12.579717  605890 out.go:179] * Using the docker driver based on existing profile
	I0929 12:34:12.581525  605890 start.go:304] selected driver: docker
	I0929 12:34:12.581551  605890 start.go:924] validating driver "docker" against &{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:12.581671  605890 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:34:12.581788  605890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:12.664262  605890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:63 SystemTime:2025-09-29 12:34:12.649028618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:12.665218  605890 cni.go:84] Creating CNI manager for ""
	I0929 12:34:12.665324  605890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 12:34:12.665413  605890 start.go:348] cluster config:
	{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:12.668477  605890 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 12:41:56 functional-253578 crio[4197]: time="2025-09-29 12:41:56.757210668Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5cbd8459-0b26-48c5-9582-50814b7f30b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:41:56 functional-253578 crio[4197]: time="2025-09-29 12:41:56.757480454Z" level=info msg="Image docker.io/nginx:alpine not found" id=5cbd8459-0b26-48c5-9582-50814b7f30b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:41:57 functional-253578 crio[4197]: time="2025-09-29 12:41:57.756987168Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=591db66c-e4a3-401d-af7f-75cf572954d3 name=/runtime.v1.ImageService/PullImage
	Sep 29 12:42:09 functional-253578 crio[4197]: time="2025-09-29 12:42:09.757008525Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7e2e7f9e-a002-4330-baf1-d5860cd6a3c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:09 functional-253578 crio[4197]: time="2025-09-29 12:42:09.757346814Z" level=info msg="Image docker.io/nginx:alpine not found" id=7e2e7f9e-a002-4330-baf1-d5860cd6a3c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:21 functional-253578 crio[4197]: time="2025-09-29 12:42:21.757192430Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=67cf7c62-33cd-406c-a5c6-09ea8fa7fab3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:21 functional-253578 crio[4197]: time="2025-09-29 12:42:21.757513090Z" level=info msg="Image docker.io/nginx:alpine not found" id=67cf7c62-33cd-406c-a5c6-09ea8fa7fab3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:33 functional-253578 crio[4197]: time="2025-09-29 12:42:33.756844294Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=68923b2d-8a75-4d64-91f1-bda9649fa940 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:33 functional-253578 crio[4197]: time="2025-09-29 12:42:33.757112727Z" level=info msg="Image docker.io/nginx:alpine not found" id=68923b2d-8a75-4d64-91f1-bda9649fa940 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:44 functional-253578 crio[4197]: time="2025-09-29 12:42:44.756877538Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c8e1bb40-88ed-4e45-989c-40f8ed8baa9a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:44 functional-253578 crio[4197]: time="2025-09-29 12:42:44.757186590Z" level=info msg="Image docker.io/nginx:alpine not found" id=c8e1bb40-88ed-4e45-989c-40f8ed8baa9a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:57 functional-253578 crio[4197]: time="2025-09-29 12:42:57.756863775Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=de71637f-0910-4921-9a4b-b8e7c48481d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:42:57 functional-253578 crio[4197]: time="2025-09-29 12:42:57.757270835Z" level=info msg="Image docker.io/nginx:alpine not found" id=de71637f-0910-4921-9a4b-b8e7c48481d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:43:10 functional-253578 crio[4197]: time="2025-09-29 12:43:10.757320074Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3b3611aa-1c02-4c7e-b4b6-b4a470b0fd51 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:43:10 functional-253578 crio[4197]: time="2025-09-29 12:43:10.757597293Z" level=info msg="Image docker.io/nginx:alpine not found" id=3b3611aa-1c02-4c7e-b4b6-b4a470b0fd51 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:43:10 functional-253578 crio[4197]: time="2025-09-29 12:43:10.758249963Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6ec16b7e-aa63-49a9-8fc5-efd1d4e4a0a2 name=/runtime.v1.ImageService/PullImage
	Sep 29 12:43:10 functional-253578 crio[4197]: time="2025-09-29 12:43:10.762614798Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 12:43:49 functional-253578 crio[4197]: time="2025-09-29 12:43:49.756770846Z" level=info msg="Pulling image: docker.io/nginx:latest" id=390e4138-ca4a-4ac5-963e-a414dc415aec name=/runtime.v1.ImageService/PullImage
	Sep 29 12:43:49 functional-253578 crio[4197]: time="2025-09-29 12:43:49.758340778Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 12:43:55 functional-253578 crio[4197]: time="2025-09-29 12:43:55.757228992Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=dfd51a58-6665-420c-8c4b-9b273769c868 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:43:55 functional-253578 crio[4197]: time="2025-09-29 12:43:55.757452739Z" level=info msg="Image docker.io/nginx:alpine not found" id=dfd51a58-6665-420c-8c4b-9b273769c868 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:44:07 functional-253578 crio[4197]: time="2025-09-29 12:44:07.757426510Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=bef12f19-4f45-47ef-985f-1349c9f9bad3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:44:07 functional-253578 crio[4197]: time="2025-09-29 12:44:07.757684146Z" level=info msg="Image docker.io/nginx:alpine not found" id=bef12f19-4f45-47ef-985f-1349c9f9bad3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:44:19 functional-253578 crio[4197]: time="2025-09-29 12:44:19.756766845Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6ef795bc-c6d4-4476-8f07-e0d39ab98098 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:44:19 functional-253578 crio[4197]: time="2025-09-29 12:44:19.757049551Z" level=info msg="Image docker.io/nginx:alpine not found" id=6ef795bc-c6d4-4476-8f07-e0d39ab98098 name=/runtime.v1.ImageService/ImageStatus
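The CRI-O log shows the same registry behavior for the nginx images used by other parallel tests: docker.io/nginx:alpine is repeatedly reported as not found while the pull of docker.io/library/nginx:alpine is still in flight. If pulls from Docker Hub are slow or rate-limited on the runner, one workaround is to pre-pull the fully-qualified image on the node with crictl; a sketch (illustrative, not part of this run):

    # Pull the image inside the minikube node so the next ImageStatus check finds it.
    minikube -p functional-253578 ssh -- sudo crictl pull docker.io/library/nginx:alpine

    # Verify it is now visible to CRI-O.
    minikube -p functional-253578 ssh -- sudo crictl images | grep nginx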
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	800a9e2ed1b10       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   09656a76ee1e8       dashboard-metrics-scraper-77bf4d6c4c-r4lpn
	796da96136821       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   85e69fb78158e       kubernetes-dashboard-855c9754f9-6mxlr
	e97d02e8257e4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   5abdfdffde5eb       busybox-mount
	2f0d49822c726       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  10 minutes ago      Running             mysql                       0                   f97dad052ace4       mysql-5bb876957f-pqwk8
	a3478acb5635f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   bc07a337d7810       storage-provisioner
	3d79e8ee7d77a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   96c6eddb78178       kube-apiserver-functional-253578
	0294f4a19f054       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     1                   54137805d232e       kube-controller-manager-functional-253578
	ca9a8b95e7b2b       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              1                   3271bc4776308       kube-scheduler-functional-253578
	51149fc5368ee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   180bb87ad7d38       etcd-functional-253578
	43c94dbe7dc9c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Running             kube-proxy                  1                   bc40d2900b353       kube-proxy-l2tmd
	cc462900c191d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   8df66ba628fdb       kindnet-dtwgc
	bb673ce7c8379       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   bc07a337d7810       storage-provisioner
	7812b80a3deb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   11bd7e239cb80       coredns-66bc5c9577-xhr4r
	e76fe9799d496       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   11bd7e239cb80       coredns-66bc5c9577-xhr4r
	39ccea340216b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   8df66ba628fdb       kindnet-dtwgc
	ec2a58946f464       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  0                   bc40d2900b353       kube-proxy-l2tmd
	6e30358b6c034       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   180bb87ad7d38       etcd-functional-253578
	3df4f420a021c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     0                   54137805d232e       kube-controller-manager-functional-253578
	769e1c3d4b1a3       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              0                   3271bc4776308       kube-scheduler-functional-253578
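The table above is the CRI-O view of the node after the mid-test restart: several control-plane containers show one Exited attempt from before the restart and a Running attempt after it. Its columns match what crictl prints; a minimal sketch of collecting the same view by hand (an assumption about available tooling on the node, not taken from this log):

    # List all CRI-O containers, including exited ones, from inside the node.
    minikube -p functional-253578 ssh -- sudo crictl ps -a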
	
	
	==> coredns [7812b80a3deb48ea0c68dfcb9b3e1e54e1ad4007f16da467c75256791dddb9a7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44221 - 14032 "HINFO IN 26357756520382258.83809730824062614. udp 53 false 512" NXDOMAIN qr,rd,ra 128 0.135411891s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e76fe9799d496dd8a613aebb5c91267f0ea188acb78d607135fc999c8ab32fff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34133 - 33964 "HINFO IN 6877996517575220631.5899426839076039489. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06863097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-253578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-253578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-253578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_32_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-253578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:40:41 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:40:41 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:40:41 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:40:41 +0000   Mon, 29 Sep 2025 12:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-253578
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdd83ba3f8234cbb86b0ac5aaf1fde4b
	  System UUID:                fac40c0a-3a47-4e74-b681-e377844533e0
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-w5d8l                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-6hlmr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-pqwk8                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-xhr4r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-253578                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-dtwgc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-253578              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-253578     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-l2tmd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-253578              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-r4lpn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6mxlr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-253578 event: Registered Node functional-253578 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-253578 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-253578 event: Registered Node functional-253578 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 1d 17 83 9b cd 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 2d e6 8e 79 5a 08 06
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [51149fc5368ee4ec385247b28e8cdf01fc9ac8e065bd224750f6e07f024a4d2d] <==
	{"level":"warn","ts":"2025-09-29T12:33:42.567115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.574177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.580710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.587441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.593824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.600677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.613109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.619755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.627706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.634253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.641650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.648126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.655020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.661617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.668216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.674604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.681203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.687581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.706350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.713092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.719902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.774536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:43:42.252340Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2025-09-29T12:43:42.273039Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1137,"took":"20.325282ms","hash":1517129347,"current-db-size-bytes":3588096,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1712128,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-29T12:43:42.273093Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1517129347,"revision":1137,"compact-revision":-1}
	
	
	==> etcd [6e30358b6c03466bcf50ba7fe5beca23acdec6ae3c1b08a8b64a3e41c334759d] <==
	{"level":"warn","ts":"2025-09-29T12:32:41.129472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.136475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.143028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.161245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.167567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.174906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.218897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:33:38.702710Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:33:38.702826Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-253578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:33:38.702955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:33:38.703111Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:33:38.704688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.704750Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:33:38.704814Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704781Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:33:38.704818Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704784Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704840Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704850Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:33:38.704851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:33:38.704860Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.706723Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:33:38.706803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.706844Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:33:38.706854Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-253578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:44:30 up  2:26,  0 users,  load average: 0.21, 0.33, 1.31
	Linux functional-253578 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [39ccea340216b5df091799c6ed64c61363041c981f6c80a5bfd5f98a46b62130] <==
	I0929 12:32:50.107381       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:32:50.107731       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:32:50.107954       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:32:50.107974       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:32:50.108002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:32:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:32:50.312215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:32:50.312529       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:32:50.312546       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:32:50.312747       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:32:50.704160       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:32:50.704205       1 metrics.go:72] Registering metrics
	I0929 12:32:50.704304       1 controller.go:711] "Syncing nftables rules"
	I0929 12:33:00.313860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:00.313940       1 main.go:301] handling current node
	I0929 12:33:10.319036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:10.319095       1 main.go:301] handling current node
	I0929 12:33:20.316044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:20.316083       1 main.go:301] handling current node
	
	
	==> kindnet [cc462900c191dea9f5eb4b055d6dd9085307c3d362a6d5ad270cca39faee9ca9] <==
	I0929 12:42:29.049150       1 main.go:301] handling current node
	I0929 12:42:39.050876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:42:39.050939       1 main.go:301] handling current node
	I0929 12:42:49.048724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:42:49.048763       1 main.go:301] handling current node
	I0929 12:42:59.057876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:42:59.057938       1 main.go:301] handling current node
	I0929 12:43:09.052768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:09.052820       1 main.go:301] handling current node
	I0929 12:43:19.049258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:19.049309       1 main.go:301] handling current node
	I0929 12:43:29.048765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:29.048813       1 main.go:301] handling current node
	I0929 12:43:39.052073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:39.052110       1 main.go:301] handling current node
	I0929 12:43:49.049547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:49.049606       1 main.go:301] handling current node
	I0929 12:43:59.050948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:43:59.050981       1 main.go:301] handling current node
	I0929 12:44:09.049183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:44:09.049249       1 main.go:301] handling current node
	I0929 12:44:19.048721       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:44:19.048760       1 main.go:301] handling current node
	I0929 12:44:29.049113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:44:29.049150       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d79e8ee7d77ae6fd301406c37a83f7915a059a92a270b7aee96408569007bb5] <==
	I0929 12:34:14.117270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.120.138"}
	I0929 12:34:21.088657       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.59.227"}
	E0929 12:34:24.569999       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58244: use of closed network connection
	E0929 12:34:26.127706       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58270: use of closed network connection
	E0929 12:34:28.409332       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58294: use of closed network connection
	I0929 12:34:28.633450       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.222.117"}
	I0929 12:34:28.990679       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.134.212"}
	I0929 12:34:46.108216       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:00.319679       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:54.929530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:11.550989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:57.725010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:37:13.112021       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:07.824408       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:37.684588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:39:08.723477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:39:48.532320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:40:32.848206       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:41:17.347449       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:41:41.907928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:42:22.934926       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:42:45.883699       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:43:26.302539       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:43:43.181438       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:43:51.700813       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0294f4a19f054b67a348be2a2a92efbf22e14894dacb1dfc181213d62e8337ac] <==
	I0929 12:33:46.585699       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:33:46.585740       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 12:33:46.585705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 12:33:46.585778       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:33:46.585813       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:33:46.585938       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:33:46.590233       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:33:46.591488       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:33:46.598602       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:33:46.598721       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:33:46.598775       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:33:46.598787       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:33:46.598795       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:33:46.600676       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:33:46.600821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:33:46.600982       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-253578"
	I0929 12:33:46.601049       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:33:46.602777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:33:46.612069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0929 12:34:14.001797       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.013150       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.020762       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.024043       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.028366       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.031013       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [3df4f420a021c04d51d68ec55fa84a36e2614c3377933d5b153c6d24fa40dced] <==
	I0929 12:32:48.642128       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 12:32:48.642282       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 12:32:48.643376       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:32:48.643430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:32:48.643444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 12:32:48.643487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 12:32:48.643566       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 12:32:48.643568       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:32:48.643590       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:32:48.643696       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:32:48.643731       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 12:32:48.644792       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:32:48.648123       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:32:48.648172       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 12:32:48.648215       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:32:48.648293       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:32:48.648356       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:32:48.648363       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:32:48.648368       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:32:48.649376       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:32:48.655967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-253578" podCIDRs=["10.244.0.0/24"]
	I0929 12:32:48.656118       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:32:48.665028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:32:48.667261       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:33:03.594309       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [43c94dbe7dc9c929b29996b719b0dc2005bc2e69d609e0aadcde1f77030df3e3] <==
	I0929 12:33:29.707138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:33:29.708158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:30.874489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:33.132459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:37.265606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:33:47.508051       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:33:47.508107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:33:47.508213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:33:47.530001       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:33:47.530082       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:33:47.536525       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:33:47.536932       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:33:47.536965       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:33:47.538421       1 config.go:200] "Starting service config controller"
	I0929 12:33:47.538453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:33:47.538490       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:33:47.538502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:33:47.538529       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:33:47.538540       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:33:47.538620       1 config.go:309] "Starting node config controller"
	I0929 12:33:47.538693       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:33:47.538705       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:33:47.638592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:33:47.638611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:33:47.638658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ec2a58946f464e34e02c47e38f8f410b15c1c019005880f5f2817933668ee71d] <==
	I0929 12:32:49.948815       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:32:50.021006       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:32:50.121941       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:32:50.121993       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:32:50.122134       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:32:50.143394       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:32:50.143467       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:32:50.149237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:32:50.149682       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:32:50.149727       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:32:50.151194       1 config.go:200] "Starting service config controller"
	I0929 12:32:50.151227       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:32:50.151236       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:32:50.151239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:32:50.151271       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:32:50.151298       1 config.go:309] "Starting node config controller"
	I0929 12:32:50.151307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:32:50.151299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:32:50.151315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:32:50.251515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:32:50.252708       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:32:50.252761       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [769e1c3d4b1a3c5c8a52f9e10b7009a65e88feb03b79a7e42c6f9446c56eba9a] <==
	E0929 12:32:41.657485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:32:41.657508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:32:41.657566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:32:41.657589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:32:42.462780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:32:42.472511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 12:32:42.476163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:32:42.513856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:32:42.533695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:32:42.688743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:32:42.716363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:32:42.723650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:32:42.731971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:32:42.742338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:32:42.838084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:32:42.882747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:32:42.929056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:32:42.952335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 12:32:44.953860       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:38.986415       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:33:38.986520       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:38.986652       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:33:38.986689       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:33:38.986730       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:33:38.986760       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca9a8b95e7b2be43ee50ad901ec3a03395f6be36fc6098feb79acdc0e4e61841] <==
	I0929 12:33:41.869362       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:33:43.166585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:33:43.166627       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:33:43.166640       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:33:43.166653       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:33:43.189698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:33:43.189729       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:33:43.191573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:43.191628       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:43.191848       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:33:43.192230       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:33:43.291807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:43:43 functional-253578 kubelet[5141]: E0929 12:43:43.756675    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:43:50 functional-253578 kubelet[5141]: E0929 12:43:50.865638    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149830865443461  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:43:50 functional-253578 kubelet[5141]: E0929 12:43:50.865672    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149830865443461  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:43:51 functional-253578 kubelet[5141]: E0929 12:43:51.756722    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:43:55 functional-253578 kubelet[5141]: E0929 12:43:55.756826    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:43:55 functional-253578 kubelet[5141]: E0929 12:43:55.757825    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5939da94-9f6a-4aac-8729-ac253718f1ae"
	Sep 29 12:44:00 functional-253578 kubelet[5141]: E0929 12:44:00.868008    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149840867604420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:00 functional-253578 kubelet[5141]: E0929 12:44:00.868043    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149840867604420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:02 functional-253578 kubelet[5141]: E0929 12:44:02.756504    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:44:06 functional-253578 kubelet[5141]: E0929 12:44:06.756751    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:44:07 functional-253578 kubelet[5141]: E0929 12:44:07.758091    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5939da94-9f6a-4aac-8729-ac253718f1ae"
	Sep 29 12:44:10 functional-253578 kubelet[5141]: E0929 12:44:10.869635    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149850869416736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:10 functional-253578 kubelet[5141]: E0929 12:44:10.869681    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149850869416736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:13 functional-253578 kubelet[5141]: E0929 12:44:13.756698    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:44:19 functional-253578 kubelet[5141]: E0929 12:44:19.756561    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:44:19 functional-253578 kubelet[5141]: E0929 12:44:19.757388    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5939da94-9f6a-4aac-8729-ac253718f1ae"
	Sep 29 12:44:20 functional-253578 kubelet[5141]: E0929 12:44:20.871068    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149860870815495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:20 functional-253578 kubelet[5141]: E0929 12:44:20.871099    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149860870815495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:21 functional-253578 kubelet[5141]: E0929 12:44:21.085922    5141 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 12:44:21 functional-253578 kubelet[5141]: E0929 12:44:21.085996    5141 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 12:44:21 functional-253578 kubelet[5141]: E0929 12:44:21.086122    5141 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(1728db16-21e0-452c-8f2b-5f89b8ee26af): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:44:21 functional-253578 kubelet[5141]: E0929 12:44:21.086163    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="1728db16-21e0-452c-8f2b-5f89b8ee26af"
	Sep 29 12:44:28 functional-253578 kubelet[5141]: E0929 12:44:28.756760    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:44:30 functional-253578 kubelet[5141]: E0929 12:44:30.872645    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149870872414123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:44:30 functional-253578 kubelet[5141]: E0929 12:44:30.872686    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149870872414123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	
	
	==> kubernetes-dashboard [796da9613682112cd3daed0ed8c548158760a34133e6859dcb74250de90d993b] <==
	2025/09/29 12:34:26 Starting overwatch
	2025/09/29 12:34:26 Using namespace: kubernetes-dashboard
	2025/09/29 12:34:26 Using in-cluster config to connect to apiserver
	2025/09/29 12:34:26 Using secret token for csrf signing
	2025/09/29 12:34:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/29 12:34:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/29 12:34:26 Successful initial request to the apiserver, version: v1.34.0
	2025/09/29 12:34:26 Generating JWE encryption key
	2025/09/29 12:34:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/29 12:34:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/29 12:34:27 Initializing JWE encryption key from synchronized object
	2025/09/29 12:34:27 Creating in-cluster Sidecar client
	2025/09/29 12:34:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/29 12:34:27 Serving insecurely on HTTP port: 9090
	2025/09/29 12:34:57 Successful request to sidecar
	
	
	==> storage-provisioner [a3478acb5635f3fdbaa2cb38bf008fdb1897e15c4449369d698febbcbd0e2a88] <==
	W0929 12:44:06.178315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:08.182337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:08.187962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:10.191430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:10.195938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:12.199166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:12.203465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:14.207115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:14.212464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:16.215875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:16.220163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:18.224109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:18.229234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:20.232500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:20.236918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:22.240706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:22.246072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:24.249909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:24.254218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:26.257689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:26.262450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:28.266086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:28.270474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:30.273751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:44:30.280439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bb673ce7c83793afe0efc78f0c158652713d196f225b09191dedfb61a90492d6] <==
	I0929 12:33:28.638553       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:33:28.640190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
helpers_test.go:269: (dbg) Run:  kubectl --context functional-253578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-253578 describe pod busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-253578 describe pod busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e97d02e8257e46a67f3c8b70df3faf7aeb8423ebdf5d4bda3b5cb61ab7984e11
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 12:34:21 +0000
	      Finished:     Mon, 29 Sep 2025 12:34:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blpbg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-blpbg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-253578
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.21s (8.72s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-w5d8l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5crlj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5crlj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w5d8l to functional-253578
	  Normal   Pulling    5m29s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     5m28s (x5 over 9m29s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     5m28s (x5 over 9m29s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m20s (x16 over 9m28s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m10s (x21 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-6hlmr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fhrx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7fhrx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6hlmr to functional-253578
	  Normal   Pulling    5m20s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     5m20s (x5 over 9m29s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     5m20s (x5 over 9m29s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m16s (x16 over 9m28s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m13s (x21 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxhbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxhbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-253578
	  Warning  Failed     9m29s                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m39s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m7s (x5 over 9m29s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m7s (x4 over 8m27s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m49s (x16 over 9m28s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    107s (x21 over 9m28s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkqbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fkqbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/sp-pod to functional-253578
	  Normal   Pulling    3m55s (x5 over 9m57s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m23s (x5 over 8m58s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m23s (x5 over 8m58s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m17s (x16 over 8m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    71s (x21 over 8m57s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.37s)
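Editor's note: the hello-node and hello-node-connect pods in this failure never start because CRI-O refuses the unqualified image name "kicbase/echo-server": with no unqualified-search registries configured, short-name resolution fails as shown in the kubelet events above. A minimal sketch of a node-side configuration that would let such a short name resolve follows; the drop-in file path and the alias target are illustrative assumptions, not taken from this run:

	# /etc/containers/registries.conf.d/99-echo-server.conf  (illustrative drop-in)
	# Let unqualified names fall back to Docker Hub ...
	unqualified-search-registries = ["docker.io"]

	[aliases]
	  # ... and map the short name used by the test to a fully qualified reference.
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Referencing a fully qualified image in the deployment (e.g. docker.io/kicbase/echo-server:latest) would avoid short-name resolution entirely.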

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (368.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fd3c807e-5584-4f25-82dd-a2b3d92ef105] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003860873s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-253578 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-253578 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-253578 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-253578 apply -f testdata/storage-provisioner/pod.yaml
I0929 12:34:33.803361  567516 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1728db16-21e0-452c-8f2b-5f89b8ee26af] Pending
helpers_test.go:352: "sp-pod" [1728db16-21e0-452c-8f2b-5f89b8ee26af] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0929 12:34:37.324428  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:59.246392  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:15.385023  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 12:40:34.135434109 +0000 UTC m=+904.435470566
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-253578 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-253578 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-253578/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:34:33 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:  10.244.0.11
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkqbb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-fkqbb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                 From               Message
----     ------     ----                ----               -------
Normal   Scheduled  6m1s                default-scheduler  Successfully assigned default/sp-pod to functional-253578
Normal   Pulling    2m5s (x4 over 6m)   kubelet            Pulling image "docker.io/nginx"
Warning  Failed     91s (x4 over 5m1s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     91s (x4 over 5m1s)  kubelet            Error: ErrImagePull
Normal   BackOff    12s (x11 over 5m)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     12s (x11 over 5m)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-253578 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-253578 logs sp-pod -n default: exit status 1 (78.596373ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-253578 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
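Editor's note: sp-pod (like nginx-svc above) stays in ImagePullBackOff because every pull of docker.io/nginx from inside the node hits Docker Hub's unauthenticated rate limit (toomanyrequests). A rough way to sidestep that in a run like this is to hand the image to the node from the host rather than pulling it in-cluster; a sketch, assuming the host can still pull or already has the image cached locally:

	# Pull (or reuse) the image on the host, then load it into the minikube node,
	# so the kubelet finds it locally instead of contacting docker.io.
	docker pull docker.io/nginx:latest
	out/minikube-linux-amd64 -p functional-253578 image load docker.io/nginx:latest

Authenticating the node to Docker Hub, or mirroring the image to another registry, would avoid the rate limit as well.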
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-253578
helpers_test.go:243: (dbg) docker inspect functional-253578:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3",
	        "Created": "2025-09-29T12:32:29.072691477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 593555,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:32:29.11135704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/hosts",
	        "LogPath": "/var/lib/docker/containers/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3/c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3-json.log",
	        "Name": "/functional-253578",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-253578:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-253578",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6737fc56ae04098a1c8757936e00c70c547e470016c3bd832a2794d677926e3",
	                "LowerDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6a70764f57c78a09b3b19ed64791d16cb699b5c060c14f4a47e2cf1e9f92b09/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-253578",
	                "Source": "/var/lib/docker/volumes/functional-253578/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253578",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253578",
	                "name.minikube.sigs.k8s.io": "functional-253578",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b74555dc99ea41e1fdf56c4a7f3c2858156d841aaa590bb51ee17e60a94dd1d2",
	            "SandboxKey": "/var/run/docker/netns/b74555dc99ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253578": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:56:ea:80:c1:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ef9fc70993bc556b3b99be84e9e150092592395f1751c65f8fa1ccc28c5096d",
	                    "EndpointID": "ee62973932a58a0fbb41ebddf22d29fd2522ae60f4808a7381f6aa5a90fad6bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253578",
	                        "c6737fc56ae0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-253578 -n functional-253578
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 logs -n 25: (1.656328473s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-253578 ssh stat /mount-9p/created-by-pod                                                                               │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh sudo umount -f /mount-9p                                                                                    │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdspecific-port4019347123/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh -- ls -la /mount-9p                                                                                         │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh sudo umount -f /mount-9p                                                                                    │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount1                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount3 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount1 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ mount          │ -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount2 --alsologtostderr -v=1                 │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ ssh            │ functional-253578 ssh findmnt -T /mount1                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh findmnt -T /mount2                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ ssh            │ functional-253578 ssh findmnt -T /mount3                                                                                          │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │ 29 Sep 25 12:34 UTC │
	│ mount          │ -p functional-253578 --kill=true                                                                                                  │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:34 UTC │                     │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ update-context │ functional-253578 update-context --alsologtostderr -v=2                                                                           │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format short --alsologtostderr                                                                       │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format yaml --alsologtostderr                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ ssh            │ functional-253578 ssh pgrep buildkitd                                                                                             │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │                     │
	│ image          │ functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr                            │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls                                                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format json --alsologtostderr                                                                        │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	│ image          │ functional-253578 image ls --format table --alsologtostderr                                                                       │ functional-253578 │ jenkins │ v1.37.0 │ 29 Sep 25 12:40 UTC │ 29 Sep 25 12:40 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:34:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:34:12.460221  605890 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:34:12.460584  605890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.460600  605890 out.go:374] Setting ErrFile to fd 2...
	I0929 12:34:12.460607  605890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.462391  605890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:34:12.463952  605890 out.go:368] Setting JSON to false
	I0929 12:34:12.465334  605890 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8197,"bootTime":1759141055,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:34:12.465493  605890 start.go:140] virtualization: kvm guest
	I0929 12:34:12.467687  605890 out.go:179] * [functional-253578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:34:12.469770  605890 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:34:12.469794  605890 notify.go:220] Checking for updates...
	I0929 12:34:12.472720  605890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:34:12.474283  605890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:34:12.476335  605890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:34:12.478097  605890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:34:12.482503  605890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:34:12.484445  605890 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:34:12.485021  605890 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:34:12.510637  605890 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:34:12.510792  605890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:12.577265  605890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:34:12.563704278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:12.577417  605890 docker.go:318] overlay module found
	I0929 12:34:12.579717  605890 out.go:179] * Using the docker driver based on existing profile
	I0929 12:34:12.581525  605890 start.go:304] selected driver: docker
	I0929 12:34:12.581551  605890 start.go:924] validating driver "docker" against &{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:12.581671  605890 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:34:12.581788  605890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:12.664262  605890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:63 SystemTime:2025-09-29 12:34:12.649028618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:12.665218  605890 cni.go:84] Creating CNI manager for ""
	I0929 12:34:12.665324  605890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 12:34:12.665413  605890 start.go:348] cluster config:
	{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:12.668477  605890 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 12:37:38 functional-253578 crio[4197]: time="2025-09-29 12:37:38.798658376Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3e7f1784-47d6-4777-ba96-1d79d1985621 name=/runtime.v1.ImageService/PullImage
	Sep 29 12:37:45 functional-253578 crio[4197]: time="2025-09-29 12:37:45.757619873Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=64db104a-c13d-41cb-9596-c72be5eadd01 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:37:45 functional-253578 crio[4197]: time="2025-09-29 12:37:45.757873076Z" level=info msg="Image docker.io/nginx:alpine not found" id=64db104a-c13d-41cb-9596-c72be5eadd01 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:00 functional-253578 crio[4197]: time="2025-09-29 12:38:00.757907742Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b299b43b-0daf-4596-bf35-988d3e70a4e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:00 functional-253578 crio[4197]: time="2025-09-29 12:38:00.758204251Z" level=info msg="Image docker.io/nginx:alpine not found" id=b299b43b-0daf-4596-bf35-988d3e70a4e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:00 functional-253578 crio[4197]: time="2025-09-29 12:38:00.758757951Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=5aa5b702-98e0-47ce-b9ce-9b386b783501 name=/runtime.v1.ImageService/PullImage
	Sep 29 12:38:00 functional-253578 crio[4197]: time="2025-09-29 12:38:00.762344689Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 12:38:32 functional-253578 crio[4197]: time="2025-09-29 12:38:32.102130850Z" level=info msg="Pulling image: docker.io/nginx:latest" id=1739eb3b-0d64-467c-873f-fd62c264def5 name=/runtime.v1.ImageService/PullImage
	Sep 29 12:38:32 functional-253578 crio[4197]: time="2025-09-29 12:38:32.105191140Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 12:38:45 functional-253578 crio[4197]: time="2025-09-29 12:38:45.757078809Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0e6e5e24-6ff3-47e0-87ef-43ae7f31a7f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:45 functional-253578 crio[4197]: time="2025-09-29 12:38:45.757382295Z" level=info msg="Image docker.io/nginx:alpine not found" id=0e6e5e24-6ff3-47e0-87ef-43ae7f31a7f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:57 functional-253578 crio[4197]: time="2025-09-29 12:38:57.756873042Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b060e0ae-fe5b-499c-91ee-7255f95f332c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:38:57 functional-253578 crio[4197]: time="2025-09-29 12:38:57.757170410Z" level=info msg="Image docker.io/nginx:alpine not found" id=b060e0ae-fe5b-499c-91ee-7255f95f332c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:03 functional-253578 crio[4197]: time="2025-09-29 12:39:03.441824583Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=35de62d6-4bd0-4af9-9b72-4305cc09afbe name=/runtime.v1.ImageService/PullImage
	Sep 29 12:39:10 functional-253578 crio[4197]: time="2025-09-29 12:39:10.758138496Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d31c964c-8a02-4c10-b216-1bedc022be05 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:10 functional-253578 crio[4197]: time="2025-09-29 12:39:10.758354613Z" level=info msg="Image docker.io/nginx:alpine not found" id=d31c964c-8a02-4c10-b216-1bedc022be05 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:11 functional-253578 crio[4197]: time="2025-09-29 12:39:11.757001900Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8709f093-9877-4440-aac2-55464fa457de name=/runtime.v1.ImageService/PullImage
	Sep 29 12:39:24 functional-253578 crio[4197]: time="2025-09-29 12:39:24.756590823Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7d53a62a-2521-4e04-91e7-8fb8d2829955 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:24 functional-253578 crio[4197]: time="2025-09-29 12:39:24.756909414Z" level=info msg="Image docker.io/nginx:alpine not found" id=7d53a62a-2521-4e04-91e7-8fb8d2829955 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:37 functional-253578 crio[4197]: time="2025-09-29 12:39:37.757156817Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ad0cf2ea-cd4b-40ef-b31a-15cd235a3939 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:37 functional-253578 crio[4197]: time="2025-09-29 12:39:37.757452213Z" level=info msg="Image docker.io/nginx:alpine not found" id=ad0cf2ea-cd4b-40ef-b31a-15cd235a3939 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:52 functional-253578 crio[4197]: time="2025-09-29 12:39:52.756357423Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7b37de48-a0a4-4a5a-a720-e495bd5d39d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:52 functional-253578 crio[4197]: time="2025-09-29 12:39:52.756610203Z" level=info msg="Image docker.io/nginx:alpine not found" id=7b37de48-a0a4-4a5a-a720-e495bd5d39d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 12:39:52 functional-253578 crio[4197]: time="2025-09-29 12:39:52.757228229Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=82f2267a-7991-4f59-a9b9-5975101c738b name=/runtime.v1.ImageService/PullImage
	Sep 29 12:39:52 functional-253578 crio[4197]: time="2025-09-29 12:39:52.778682795Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	800a9e2ed1b10       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   6 minutes ago       Running             dashboard-metrics-scraper   0                   09656a76ee1e8       dashboard-metrics-scraper-77bf4d6c4c-r4lpn
	796da96136821       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         6 minutes ago       Running             kubernetes-dashboard        0                   85e69fb78158e       kubernetes-dashboard-855c9754f9-6mxlr
	e97d02e8257e4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              6 minutes ago       Exited              mount-munger                0                   5abdfdffde5eb       busybox-mount
	2f0d49822c726       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  6 minutes ago       Running             mysql                       0                   f97dad052ace4       mysql-5bb876957f-pqwk8
	a3478acb5635f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Running             storage-provisioner         2                   bc07a337d7810       storage-provisioner
	3d79e8ee7d77a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 6 minutes ago       Running             kube-apiserver              0                   96c6eddb78178       kube-apiserver-functional-253578
	0294f4a19f054       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 6 minutes ago       Running             kube-controller-manager     1                   54137805d232e       kube-controller-manager-functional-253578
	ca9a8b95e7b2b       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 6 minutes ago       Running             kube-scheduler              1                   3271bc4776308       kube-scheduler-functional-253578
	51149fc5368ee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 6 minutes ago       Running             etcd                        1                   180bb87ad7d38       etcd-functional-253578
	43c94dbe7dc9c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 7 minutes ago       Running             kube-proxy                  1                   bc40d2900b353       kube-proxy-l2tmd
	cc462900c191d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 7 minutes ago       Running             kindnet-cni                 1                   8df66ba628fdb       kindnet-dtwgc
	bb673ce7c8379       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 7 minutes ago       Exited              storage-provisioner         1                   bc07a337d7810       storage-provisioner
	7812b80a3deb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 7 minutes ago       Running             coredns                     1                   11bd7e239cb80       coredns-66bc5c9577-xhr4r
	e76fe9799d496       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 7 minutes ago       Exited              coredns                     0                   11bd7e239cb80       coredns-66bc5c9577-xhr4r
	39ccea340216b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 7 minutes ago       Exited              kindnet-cni                 0                   8df66ba628fdb       kindnet-dtwgc
	ec2a58946f464       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 7 minutes ago       Exited              kube-proxy                  0                   bc40d2900b353       kube-proxy-l2tmd
	6e30358b6c034       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 7 minutes ago       Exited              etcd                        0                   180bb87ad7d38       etcd-functional-253578
	3df4f420a021c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 7 minutes ago       Exited              kube-controller-manager     0                   54137805d232e       kube-controller-manager-functional-253578
	769e1c3d4b1a3       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 7 minutes ago       Exited              kube-scheduler              0                   3271bc4776308       kube-scheduler-functional-253578
	
	
	==> coredns [7812b80a3deb48ea0c68dfcb9b3e1e54e1ad4007f16da467c75256791dddb9a7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44221 - 14032 "HINFO IN 26357756520382258.83809730824062614. udp 53 false 512" NXDOMAIN qr,rd,ra 128 0.135411891s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e76fe9799d496dd8a613aebb5c91267f0ea188acb78d607135fc999c8ab32fff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34133 - 33964 "HINFO IN 6877996517575220631.5899426839076039489. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06863097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-253578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-253578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-253578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_32_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-253578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:40:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:39:50 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:39:50 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:39:50 +0000   Mon, 29 Sep 2025 12:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:39:50 +0000   Mon, 29 Sep 2025 12:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-253578
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdd83ba3f8234cbb86b0ac5aaf1fde4b
	  System UUID:                fac40c0a-3a47-4e74-b681-e377844533e0
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-w5d8l                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     hello-node-connect-7d85dfc575-6hlmr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-pqwk8                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m26s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-xhr4r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m46s
	  kube-system                 etcd-functional-253578                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m51s
	  kube-system                 kindnet-dtwgc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m46s
	  kube-system                 kube-apiserver-functional-253578              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-functional-253578     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-proxy-l2tmd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-scheduler-functional-253578              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-r4lpn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6mxlr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m45s                  kube-proxy       
	  Normal  Starting                 6m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m56s (x8 over 7m56s)  kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m56s (x8 over 7m56s)  kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s (x8 over 7m56s)  kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     7m51s                  kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m51s                  kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s                  kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m47s                  node-controller  Node functional-253578 event: Registered Node functional-253578 in Controller
	  Normal  NodeReady                7m35s                  kubelet          Node functional-253578 status is now: NodeReady
	  Normal  Starting                 6m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node functional-253578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node functional-253578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x8 over 6m55s)  kubelet          Node functional-253578 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m49s                  node-controller  Node functional-253578 event: Registered Node functional-253578 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 1d 17 83 9b cd 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 2d e6 8e 79 5a 08 06
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [51149fc5368ee4ec385247b28e8cdf01fc9ac8e065bd224750f6e07f024a4d2d] <==
	{"level":"warn","ts":"2025-09-29T12:33:42.539195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.555274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.560580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.567115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.574177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.580710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.587441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.593824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.600677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.613109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.619755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.627706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.634253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.641650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.648126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.655020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.661617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.668216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.674604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.681203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.687581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.706350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.713092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.719902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:33:42.774536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	
	
	==> etcd [6e30358b6c03466bcf50ba7fe5beca23acdec6ae3c1b08a8b64a3e41c334759d] <==
	{"level":"warn","ts":"2025-09-29T12:32:41.129472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.136475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.143028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.161245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.167567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.174906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:32:41.218897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:33:38.702710Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:33:38.702826Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-253578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:33:38.702955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:33:38.703111Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:33:38.704688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.704750Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:33:38.704814Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704781Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:33:38.704818Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704784Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704840Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:33:38.704850Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:33:38.704851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:33:38.704860Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.706723Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:33:38.706803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:33:38.706844Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:33:38.706854Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-253578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:40:35 up  2:23,  0 users,  load average: 0.20, 0.53, 1.65
	Linux functional-253578 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [39ccea340216b5df091799c6ed64c61363041c981f6c80a5bfd5f98a46b62130] <==
	I0929 12:32:50.107381       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:32:50.107731       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:32:50.107954       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:32:50.107974       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:32:50.108002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:32:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:32:50.312215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:32:50.312529       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:32:50.312546       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:32:50.312747       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:32:50.704160       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:32:50.704205       1 metrics.go:72] Registering metrics
	I0929 12:32:50.704304       1 controller.go:711] "Syncing nftables rules"
	I0929 12:33:00.313860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:00.313940       1 main.go:301] handling current node
	I0929 12:33:10.319036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:10.319095       1 main.go:301] handling current node
	I0929 12:33:20.316044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:20.316083       1 main.go:301] handling current node
	
	
	==> kindnet [cc462900c191dea9f5eb4b055d6dd9085307c3d362a6d5ad270cca39faee9ca9] <==
	I0929 12:38:29.057949       1 main.go:301] handling current node
	I0929 12:38:39.050650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:38:39.050702       1 main.go:301] handling current node
	I0929 12:38:49.048788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:38:49.048864       1 main.go:301] handling current node
	I0929 12:38:59.048906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:38:59.048952       1 main.go:301] handling current node
	I0929 12:39:09.050362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:09.050408       1 main.go:301] handling current node
	I0929 12:39:19.049268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:19.049331       1 main.go:301] handling current node
	I0929 12:39:29.050901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:29.050938       1 main.go:301] handling current node
	I0929 12:39:39.056337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:39.056397       1 main.go:301] handling current node
	I0929 12:39:49.049687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:49.049753       1 main.go:301] handling current node
	I0929 12:39:59.052347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:59.052398       1 main.go:301] handling current node
	I0929 12:40:09.052196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:09.052234       1 main.go:301] handling current node
	I0929 12:40:19.049274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:19.049315       1 main.go:301] handling current node
	I0929 12:40:29.053525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:29.053563       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d79e8ee7d77ae6fd301406c37a83f7915a059a92a270b7aee96408569007bb5] <==
	I0929 12:33:45.330773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 12:33:45.337584       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 12:33:55.020239       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 12:34:03.662467       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.7.96"}
	I0929 12:34:09.371780       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.6.69"}
	I0929 12:34:13.918911       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 12:34:14.099796       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.77.67"}
	I0929 12:34:14.117270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.120.138"}
	I0929 12:34:21.088657       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.59.227"}
	E0929 12:34:24.569999       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58244: use of closed network connection
	E0929 12:34:26.127706       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58270: use of closed network connection
	E0929 12:34:28.409332       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58294: use of closed network connection
	I0929 12:34:28.633450       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.222.117"}
	I0929 12:34:28.990679       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.134.212"}
	I0929 12:34:46.108216       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:00.319679       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:54.929530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:11.550989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:57.725010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:37:13.112021       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:07.824408       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:37.684588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:39:08.723477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:39:48.532320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:40:32.848206       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0294f4a19f054b67a348be2a2a92efbf22e14894dacb1dfc181213d62e8337ac] <==
	I0929 12:33:46.585699       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:33:46.585740       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 12:33:46.585705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 12:33:46.585778       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:33:46.585813       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:33:46.585938       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:33:46.590233       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:33:46.591488       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:33:46.598602       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:33:46.598721       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:33:46.598775       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:33:46.598787       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:33:46.598795       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:33:46.600676       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:33:46.600821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:33:46.600982       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-253578"
	I0929 12:33:46.601049       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:33:46.602777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:33:46.612069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0929 12:34:14.001797       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.013150       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.020762       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.024043       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.028366       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:34:14.031013       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [3df4f420a021c04d51d68ec55fa84a36e2614c3377933d5b153c6d24fa40dced] <==
	I0929 12:32:48.642128       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 12:32:48.642282       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 12:32:48.643376       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:32:48.643430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:32:48.643444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 12:32:48.643487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 12:32:48.643566       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 12:32:48.643568       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:32:48.643590       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:32:48.643696       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:32:48.643731       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 12:32:48.644792       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:32:48.648123       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:32:48.648172       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 12:32:48.648215       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:32:48.648293       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:32:48.648356       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:32:48.648363       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:32:48.648368       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:32:48.649376       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:32:48.655967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-253578" podCIDRs=["10.244.0.0/24"]
	I0929 12:32:48.656118       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:32:48.665028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:32:48.667261       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:33:03.594309       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [43c94dbe7dc9c929b29996b719b0dc2005bc2e69d609e0aadcde1f77030df3e3] <==
	I0929 12:33:29.707138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:33:29.708158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:30.874489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:33.132459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:33:37.265606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-253578&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:33:47.508051       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:33:47.508107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:33:47.508213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:33:47.530001       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:33:47.530082       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:33:47.536525       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:33:47.536932       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:33:47.536965       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:33:47.538421       1 config.go:200] "Starting service config controller"
	I0929 12:33:47.538453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:33:47.538490       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:33:47.538502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:33:47.538529       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:33:47.538540       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:33:47.538620       1 config.go:309] "Starting node config controller"
	I0929 12:33:47.538693       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:33:47.538705       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:33:47.638592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:33:47.638611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:33:47.638658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ec2a58946f464e34e02c47e38f8f410b15c1c019005880f5f2817933668ee71d] <==
	I0929 12:32:49.948815       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:32:50.021006       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:32:50.121941       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:32:50.121993       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:32:50.122134       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:32:50.143394       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:32:50.143467       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:32:50.149237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:32:50.149682       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:32:50.149727       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:32:50.151194       1 config.go:200] "Starting service config controller"
	I0929 12:32:50.151227       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:32:50.151236       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:32:50.151239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:32:50.151271       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:32:50.151298       1 config.go:309] "Starting node config controller"
	I0929 12:32:50.151307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:32:50.151299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:32:50.151315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:32:50.251515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:32:50.252708       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:32:50.252761       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [769e1c3d4b1a3c5c8a52f9e10b7009a65e88feb03b79a7e42c6f9446c56eba9a] <==
	E0929 12:32:41.657485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:32:41.657508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:32:41.657566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:32:41.657589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:32:42.462780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:32:42.472511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 12:32:42.476163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:32:42.513856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:32:42.533695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:32:42.688743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:32:42.716363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:32:42.723650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:32:42.731971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:32:42.742338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:32:42.838084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:32:42.882747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:32:42.929056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:32:42.952335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 12:32:44.953860       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:38.986415       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:33:38.986520       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:38.986652       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:33:38.986689       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:33:38.986730       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:33:38.986760       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca9a8b95e7b2be43ee50ad901ec3a03395f6be36fc6098feb79acdc0e4e61841] <==
	I0929 12:33:41.869362       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:33:43.166585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:33:43.166627       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:33:43.166640       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:33:43.166653       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:33:43.189698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:33:43.189729       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:33:43.191573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:43.191628       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:33:43.191848       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:33:43.192230       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:33:43.291807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:39:47 functional-253578 kubelet[5141]: E0929 12:39:47.756255    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:39:50 functional-253578 kubelet[5141]: E0929 12:39:50.822617    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149590822389522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:39:50 functional-253578 kubelet[5141]: E0929 12:39:50.822657    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149590822389522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:39:53 functional-253578 kubelet[5141]: E0929 12:39:53.756452    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:39:57 functional-253578 kubelet[5141]: E0929 12:39:57.756508    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="1728db16-21e0-452c-8f2b-5f89b8ee26af"
	Sep 29 12:40:00 functional-253578 kubelet[5141]: E0929 12:40:00.757108    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:40:00 functional-253578 kubelet[5141]: E0929 12:40:00.824380    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149600824094760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:00 functional-253578 kubelet[5141]: E0929 12:40:00.824429    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149600824094760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:04 functional-253578 kubelet[5141]: E0929 12:40:04.756812    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:40:10 functional-253578 kubelet[5141]: E0929 12:40:10.825868    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149610825586151  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:10 functional-253578 kubelet[5141]: E0929 12:40:10.825922    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149610825586151  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:11 functional-253578 kubelet[5141]: E0929 12:40:11.756341    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:40:11 functional-253578 kubelet[5141]: E0929 12:40:11.756428    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="1728db16-21e0-452c-8f2b-5f89b8ee26af"
	Sep 29 12:40:15 functional-253578 kubelet[5141]: E0929 12:40:15.757132    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:40:20 functional-253578 kubelet[5141]: E0929 12:40:20.827739    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149620827461619  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:20 functional-253578 kubelet[5141]: E0929 12:40:20.827780    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149620827461619  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:217035}  inodes_used:{value:106}}"
	Sep 29 12:40:22 functional-253578 kubelet[5141]: E0929 12:40:22.757151    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="1728db16-21e0-452c-8f2b-5f89b8ee26af"
	Sep 29 12:40:24 functional-253578 kubelet[5141]: E0929 12:40:24.112959    5141 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:40:24 functional-253578 kubelet[5141]: E0929 12:40:24.113053    5141 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:40:24 functional-253578 kubelet[5141]: E0929 12:40:24.113171    5141 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(5939da94-9f6a-4aac-8729-ac253718f1ae): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:40:24 functional-253578 kubelet[5141]: E0929 12:40:24.113225    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5939da94-9f6a-4aac-8729-ac253718f1ae"
	Sep 29 12:40:26 functional-253578 kubelet[5141]: E0929 12:40:26.756823    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-w5d8l" podUID="0321a7b2-ce0e-4317-8d5c-a5b8a569c404"
	Sep 29 12:40:29 functional-253578 kubelet[5141]: E0929 12:40:29.756598    5141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-6hlmr" podUID="7777ae1a-135e-42b9-a22c-79f2de55f788"
	Sep 29 12:40:30 functional-253578 kubelet[5141]: E0929 12:40:30.829427    5141 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759149630829190958  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	Sep 29 12:40:30 functional-253578 kubelet[5141]: E0929 12:40:30.829466    5141 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759149630829190958  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:241884}  inodes_used:{value:122}}"
	
	
	==> kubernetes-dashboard [796da9613682112cd3daed0ed8c548158760a34133e6859dcb74250de90d993b] <==
	2025/09/29 12:34:26 Using namespace: kubernetes-dashboard
	2025/09/29 12:34:26 Using in-cluster config to connect to apiserver
	2025/09/29 12:34:26 Using secret token for csrf signing
	2025/09/29 12:34:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/29 12:34:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/29 12:34:26 Successful initial request to the apiserver, version: v1.34.0
	2025/09/29 12:34:26 Generating JWE encryption key
	2025/09/29 12:34:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/29 12:34:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/29 12:34:27 Initializing JWE encryption key from synchronized object
	2025/09/29 12:34:27 Creating in-cluster Sidecar client
	2025/09/29 12:34:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/29 12:34:27 Serving insecurely on HTTP port: 9090
	2025/09/29 12:34:57 Successful request to sidecar
	2025/09/29 12:34:26 Starting overwatch
	
	
	==> storage-provisioner [a3478acb5635f3fdbaa2cb38bf008fdb1897e15c4449369d698febbcbd0e2a88] <==
	W0929 12:40:11.172798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:13.176754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:13.182576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:15.186505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:15.191486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:17.195243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:17.199642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:19.203158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:19.208115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:21.211654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:21.217256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:23.220458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:23.224954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:25.228273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:25.233238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:27.238310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:27.243277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:29.247067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:29.252161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:31.255458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:31.259767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:33.263993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:33.268587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:35.271973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:40:35.278237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bb673ce7c83793afe0efc78f0c158652713d196f225b09191dedfb61a90492d6] <==
	I0929 12:33:28.638553       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:33:28.640190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
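One non-fatal item in the kube-proxy logs above is the warning that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. It does not explain any failure in this report, but the suggested `--nodeport-addresses primary` could be applied through the kube-proxy ConfigMap; a minimal sketch, assuming the kubeadm-style config.conf key that minikube uses:

	kubectl --context functional-253578 -n kube-system get configmap kube-proxy -o yaml
	# edit the KubeProxyConfiguration stored under the config.conf key to add:
	#   nodePortAddresses: ["primary"]
	kubectl --context functional-253578 -n kube-system rollout restart daemonset kube-proxy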
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
helpers_test.go:269: (dbg) Run:  kubectl --context functional-253578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-253578 describe pod busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-253578 describe pod busybox-mount hello-node-75c85bcc94-w5d8l hello-node-connect-7d85dfc575-6hlmr nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e97d02e8257e46a67f3c8b70df3faf7aeb8423ebdf5d4bda3b5cb61ab7984e11
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 12:34:21 +0000
	      Finished:     Mon, 29 Sep 2025 12:34:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blpbg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-blpbg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m24s  default-scheduler  Successfully assigned default/busybox-mount to functional-253578
	  Normal  Pulling    6m24s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.21s (8.72s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m15s  kubelet            Created container: mount-munger
	  Normal  Started    6m15s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-w5d8l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5crlj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5crlj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w5d8l to functional-253578
	  Normal   Pulling    94s (x5 over 6m7s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     93s (x5 over 5m34s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     93s (x5 over 5m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     25s (x16 over 5m33s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x17 over 5m33s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-6hlmr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fhrx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7fhrx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6hlmr to functional-253578
	  Normal   Pulling    85s (x5 over 6m8s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     85s (x5 over 5m34s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     85s (x5 over 5m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     21s (x16 over 5m33s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    7s (x17 over 5m33s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxhbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxhbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m15s                default-scheduler  Successfully assigned default/nginx-svc to functional-253578
	  Warning  Failed     5m34s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    44s (x5 over 6m15s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x5 over 5m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s (x4 over 4m32s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    0s (x11 over 5m33s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     0s (x11 over 5m33s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-253578/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:34:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkqbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fkqbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-253578
	  Warning  Failed     93s (x4 over 5m3s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     93s (x4 over 5m3s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x11 over 5m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     14s (x11 over 5m2s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    0s (x5 over 6m2s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
E0929 12:43:15.385736  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.40s)
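The sp-pod failure above is an unauthenticated Docker Hub pull-rate limit on docker.io/nginx, not a storage-provisioner defect. One possible mitigation for a cluster like functional-253578 is to pull with credentials: create a docker-registry secret and attach it to the default service account as an imagePullSecret. The secret name dockerhub-creds and the <user>/<token> placeholders are illustrative, not part of the test suite.

	kubectl --context functional-253578 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-253578 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

With the secret in place, pods created under the default service account (including sp-pod) pull as an authenticated user and are far less likely to hit toomanyrequests.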

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-253578 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5939da94-9f6a-4aac-8729-ac253718f1ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-29 12:38:21.422628037 +0000 UTC m=+771.722664505
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-253578 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-253578 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-253578/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:34:21 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxhbh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lxhbh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-253578
  Warning  Failed     3m19s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     74s (x3 over 3m19s)  kubelet            Error: ErrImagePull
  Warning  Failed     74s (x2 over 2m17s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    36s (x5 over 3m18s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     36s (x5 over 3m18s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    21s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-253578 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-253578 logs nginx-svc -n default: exit status 1 (72.637748ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-253578 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.67s)
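nginx-svc never became Ready for the same reason: docker.io/nginx:alpine could not be pulled because of the rate limit. An alternative to authenticating is to side-load the image so the node never pulls it; a minimal sketch, assuming the host-side pull itself is not rate-limited:

	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p functional-253578 image load docker.io/nginx:alpine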

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-253578 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-253578 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-w5d8l" [0321a7b2-ce0e-4317-8d5c-a5b8a569c404] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-253578 -n functional-253578
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 12:44:29.347456663 +0000 UTC m=+1139.647493121
functional_test.go:1460: (dbg) Run:  kubectl --context functional-253578 describe po hello-node-75c85bcc94-w5d8l -n default
functional_test.go:1460: (dbg) kubectl --context functional-253578 describe po hello-node-75c85bcc94-w5d8l -n default:
Name:             hello-node-75c85bcc94-w5d8l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-253578/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:34:28 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5crlj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-5crlj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w5d8l to functional-253578
  Normal   Pulling    5m27s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     5m26s (x5 over 9m27s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     5m26s (x5 over 9m27s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m18s (x16 over 9m26s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m8s (x21 over 9m26s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-253578 logs hello-node-75c85bcc94-w5d8l -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-253578 logs hello-node-75c85bcc94-w5d8l -n default: exit status 1 (75.110896ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-w5d8l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-253578 logs hello-node-75c85bcc94-w5d8l -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.67s)
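This failure is not a rate limit: cri-o refuses to expand the short name kicbase/echo-server because /etc/containers/registries.conf on the node defines no unqualified-search registries. Two possible workarounds, assuming docker.io is the intended registry and that cri-o on the node can be restarted safely (the :1.0 tag is illustrative; the test pulls the default :latest tag):

	# Use a fully qualified reference so no short-name resolution is needed:
	kubectl --context functional-253578 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0

	# Or allow short names to resolve against docker.io inside the node:
	out/minikube-linux-amd64 -p functional-253578 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"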

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0929 12:38:21.568690  567516 retry.go:31] will retry after 3.23502249s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:38:24.804392  567516 retry.go:31] will retry after 6.67678764s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:38:31.481661  567516 retry.go:31] will retry after 3.744993445s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:38:35.227864  567516 retry.go:31] will retry after 14.55220258s: Temporary Error: Get "http:": http: no Host in request URL
E0929 12:38:43.088451  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0929 12:38:49.780760  567516 retry.go:31] will retry after 21.86141904s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:39:11.642297  567516 retry.go:31] will retry after 20.487496708s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:39:32.130093  567516 retry.go:31] will retry after 45.489699338s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-253578 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.98.59.227   10.98.59.227   80:31818/TCP   5m56s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.11s)
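The empty URL ("http:") suggests the earlier WaitService/Setup failure left this step without a usable host, even though the LoadBalancer IP 10.98.59.227 was eventually assigned. A manual check of the same path, assuming the tunnel is kept running in one shell and the nginx-svc pod eventually starts, might look like:

	out/minikube-linux-amd64 -p functional-253578 tunnel
	# in a second shell:
	curl -s http://10.98.59.227/ | grep "Welcome to nginx!"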

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 service --namespace=default --https --url hello-node: exit status 115 (542.446389ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31945
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-253578 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 service hello-node --url --format={{.IP}}: exit status 115 (540.621422ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-253578 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 service hello-node --url: exit status 115 (540.484032ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31945
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-253578 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31945
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
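
Note: the three ServiceCmd failures above (HTTPS, Format, URL) all exit with SVC_UNREACHABLE because minikube finds no running pod behind the hello-node service, which follows from the earlier DeployApp image-pull failure; the NodePort URLs printed on stdout are well formed but have nothing serving them. A minimal client-go sketch of the same kind of check, assuming the current kubeconfig context is functional-253578 and that the hello-node service selects pods labeled "app=hello-node":

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config and talk to whatever context it currently points at.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Count only pods that are actually running behind the assumed service selector.
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "app=hello-node", // assumption: selector used by the hello-node service
			FieldSelector: "status.phase=Running",
		})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) == 0 {
			fmt.Println("no running pod for service hello-node found")
		} else {
			fmt.Printf("%d running pod(s) back the NodePort URL\n", len(pods.Items))
		}
	}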

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gg4cr" [2a3f7370-a761-486c-993f-c0a0cc93ce6b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:20:25.648772428 +0000 UTC m=+3295.948808898
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-223488 describe po kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-223488 describe po kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-gg4cr
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-223488/192.168.94.2
Start Time:       Mon, 29 Sep 2025 13:10:58 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-79dc7 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-79dc7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m27s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr to old-k8s-version-223488
Warning  Failed     5m48s (x4 over 8m53s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m48s (x4 over 8m53s)  kubelet            Error: ErrImagePull
Warning  Failed     5m33s (x6 over 8m52s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    5m18s (x7 over 8m52s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Normal   Pulling    4m21s (x5 over 9m27s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard: exit status 1 (79.334024ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-gg4cr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-223488
helpers_test.go:243: (dbg) docker inspect old-k8s-version-223488:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904",
	        "Created": "2025-09-29T13:09:18.577569114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 813376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:35.282676032Z",
	            "FinishedAt": "2025-09-29T13:10:34.395923319Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/hosts",
	        "LogPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904-json.log",
	        "Name": "/old-k8s-version-223488",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-223488:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-223488",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904",
	                "LowerDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-223488",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-223488/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-223488",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-223488",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-223488",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "831633c4715d6c4bb04097bcb43d90ab4f6a106af6efe72c1c46f36eb63bc030",
	            "SandboxKey": "/var/run/docker/netns/831633c4715d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-223488": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:19:9f:5d:a3:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0dd989a98e4be35ca09f4ad5f694ef2de11803caf0660ddd0b7a2a4c2c63ef6",
	                    "EndpointID": "17627954c891213a4a0f5121dd2871d4598ada8665af8f95f340b5597fe506d2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-223488",
	                        "3c4f9dce81a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223488 -n old-k8s-version-223488
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223488 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-223488 logs -n 25: (1.334930917s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-223488 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:14:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:14:01.801416  839515 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:14:01.801548  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801557  839515 out.go:374] Setting ErrFile to fd 2...
	I0929 13:14:01.801561  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801790  839515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:14:01.802369  839515 out.go:368] Setting JSON to false
	I0929 13:14:01.803835  839515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10587,"bootTime":1759141055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:14:01.803980  839515 start.go:140] virtualization: kvm guest
	I0929 13:14:01.806446  839515 out.go:179] * [default-k8s-diff-port-504443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:14:01.808471  839515 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:14:01.808488  839515 notify.go:220] Checking for updates...
	I0929 13:14:01.811422  839515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:14:01.813137  839515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:01.815358  839515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:14:01.817089  839515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:14:01.818747  839515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:14:01.820859  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:01.821367  839515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:14:01.850294  839515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:14:01.850496  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:01.920086  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.906779425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:01.920249  839515 docker.go:318] overlay module found
	I0929 13:14:01.923199  839515 out.go:179] * Using the docker driver based on existing profile
	I0929 13:14:01.924580  839515 start.go:304] selected driver: docker
	I0929 13:14:01.924604  839515 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:01.924742  839515 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:14:01.925594  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:02.004135  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.989084501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:02.004575  839515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:02.004635  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:02.004699  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:02.004749  839515 start.go:348] cluster config:
	{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:02.006556  839515 out.go:179] * Starting "default-k8s-diff-port-504443" primary control-plane node in "default-k8s-diff-port-504443" cluster
	I0929 13:14:02.007837  839515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:14:02.009404  839515 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:14:02.011260  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:02.011353  839515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:14:02.011371  839515 cache.go:58] Caching tarball of preloaded images
	I0929 13:14:02.011418  839515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:14:02.011589  839515 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:14:02.011606  839515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:14:02.011761  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.040696  839515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:14:02.040723  839515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:14:02.040747  839515 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:14:02.040778  839515 start.go:360] acquireMachinesLock for default-k8s-diff-port-504443: {Name:mkd1504d0afcb57e7e3a7d375c0d3d045f6ff0f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:14:02.040840  839515 start.go:364] duration metric: took 41.435µs to acquireMachinesLock for "default-k8s-diff-port-504443"
	I0929 13:14:02.040859  839515 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:14:02.040866  839515 fix.go:54] fixHost starting: 
	I0929 13:14:02.041151  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.065452  839515 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504443: state=Stopped err=<nil>
	W0929 13:14:02.065493  839515 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:14:00.890602  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:00.890614  837560 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:00.890670  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.892229  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:00.892253  837560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:00.892339  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.932762  837560 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:00.932828  837560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:00.932989  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.934137  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.945316  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.948654  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.961271  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:01.034193  837560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:01.056199  837560 node_ready.go:35] waiting up to 6m0s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:01.062352  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:01.074784  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:01.074816  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:01.080006  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:01.080035  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:01.096572  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:01.107273  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:01.107304  837560 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:01.123628  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:01.123736  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:01.159235  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.159267  837560 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:01.162841  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:01.163496  837560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:01.197386  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.198337  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:01.198359  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:01.226863  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:01.226900  837560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:01.252970  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:01.252998  837560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:01.278501  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:01.278527  837560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:01.303325  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:01.303366  837560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:01.329503  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:01.329532  837560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:01.353791  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:03.007947  837560 node_ready.go:49] node "embed-certs-144376" is "Ready"
	I0929 13:14:03.007988  837560 node_ready.go:38] duration metric: took 1.951746003s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:03.008006  837560 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:03.008068  837560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:03.686627  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.624233175s)
	I0929 13:14:03.686706  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.590098715s)
	I0929 13:14:03.686993  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489568477s)
	I0929 13:14:03.687027  837560 addons.go:479] Verifying addon metrics-server=true in "embed-certs-144376"
	I0929 13:14:03.687147  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.333304219s)
	I0929 13:14:03.687396  837560 api_server.go:72] duration metric: took 2.840723243s to wait for apiserver process to appear ...
	I0929 13:14:03.687413  837560 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:03.687434  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:03.689946  837560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-144376 addons enable metrics-server
	
	I0929 13:14:03.693918  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:03.693955  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:03.703949  837560 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
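
The repeated 500 responses above come from polling the apiserver's /healthz endpoint until every post-start hook reports ok. A minimal Go sketch of that poll-until-healthy pattern (illustrative only, not minikube's api_server.go; the URL, timeout, and TLS handling are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls a /healthz URL until it returns 200 OK or the timeout
// elapses. TLS verification is skipped purely for brevity; a real client
// would trust the cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // back off before the next probe
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
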
	I0929 13:14:02.067503  839515 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-504443" ...
	I0929 13:14:02.067595  839515 cli_runner.go:164] Run: docker start default-k8s-diff-port-504443
	I0929 13:14:02.400205  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.426021  839515 kic.go:430] container "default-k8s-diff-port-504443" state is running.
	I0929 13:14:02.426697  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:02.452245  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.452576  839515 machine.go:93] provisionDockerMachine start ...
	I0929 13:14:02.452686  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:02.476313  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:02.476569  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:02.476592  839515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:14:02.477420  839515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45360->127.0.0.1:33463: read: connection reset by peer
	I0929 13:14:05.620847  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.620906  839515 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-504443"
	I0929 13:14:05.621012  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.641909  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.642258  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.642275  839515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504443 && echo "default-k8s-diff-port-504443" | sudo tee /etc/hostname
	I0929 13:14:05.804833  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.804936  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.826632  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.826863  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.826904  839515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504443/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:14:05.968467  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:14:05.968502  839515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:14:05.968535  839515 ubuntu.go:190] setting up certificates
	I0929 13:14:05.968548  839515 provision.go:84] configureAuth start
	I0929 13:14:05.968610  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:05.988690  839515 provision.go:143] copyHostCerts
	I0929 13:14:05.988763  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:14:05.988788  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:14:05.988904  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:14:05.989039  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:14:05.989049  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:14:05.989082  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:14:05.989162  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:14:05.989170  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:14:05.989196  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:14:05.989339  839515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504443 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-504443 localhost minikube]
	I0929 13:14:06.185911  839515 provision.go:177] copyRemoteCerts
	I0929 13:14:06.185989  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:14:06.186098  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.205790  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:06.309505  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:14:06.340444  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:14:06.372277  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:14:06.402506  839515 provision.go:87] duration metric: took 433.943194ms to configureAuth
	I0929 13:14:06.402539  839515 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:14:06.402765  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:06.402931  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.424941  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:06.425216  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:06.425243  839515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:14:06.741449  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:14:06.741480  839515 machine.go:96] duration metric: took 4.288878167s to provisionDockerMachine
	I0929 13:14:06.741495  839515 start.go:293] postStartSetup for "default-k8s-diff-port-504443" (driver="docker")
	I0929 13:14:06.741509  839515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:14:06.741575  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:14:06.741626  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.764273  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:03.706436  837560 addons.go:514] duration metric: took 2.859616556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:04.188145  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.194079  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.194114  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:04.687754  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.692514  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.692547  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.188198  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.193003  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:05.193033  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.687682  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.692821  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:14:05.694070  837560 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:05.694103  837560 api_server.go:131] duration metric: took 2.006683698s to wait for apiserver health ...
	I0929 13:14:05.694113  837560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:05.699584  837560 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:05.699638  837560 system_pods.go:61] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.699655  837560 system_pods.go:61] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.699667  837560 system_pods.go:61] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.699676  837560 system_pods.go:61] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.699687  837560 system_pods.go:61] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.699697  837560 system_pods.go:61] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.699711  837560 system_pods.go:61] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.699721  837560 system_pods.go:61] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.699734  837560 system_pods.go:61] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.699743  837560 system_pods.go:74] duration metric: took 5.622791ms to wait for pod list to return data ...
	I0929 13:14:05.699757  837560 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:05.703100  837560 default_sa.go:45] found service account: "default"
	I0929 13:14:05.703127  837560 default_sa.go:55] duration metric: took 3.363521ms for default service account to be created ...
	I0929 13:14:05.703137  837560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:05.712514  837560 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:05.712559  837560 system_pods.go:89] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.712571  837560 system_pods.go:89] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.712579  837560 system_pods.go:89] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.712592  837560 system_pods.go:89] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.712601  837560 system_pods.go:89] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.712614  837560 system_pods.go:89] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.712629  837560 system_pods.go:89] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.712643  837560 system_pods.go:89] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.712648  837560 system_pods.go:89] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.712659  837560 system_pods.go:126] duration metric: took 9.514361ms to wait for k8s-apps to be running ...
	I0929 13:14:05.712669  837560 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:05.712730  837560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:05.733971  837560 system_svc.go:56] duration metric: took 21.287495ms WaitForService to wait for kubelet
	I0929 13:14:05.734004  837560 kubeadm.go:578] duration metric: took 4.887332987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:05.734047  837560 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:05.737599  837560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:05.737632  837560 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:05.737645  837560 node_conditions.go:105] duration metric: took 3.59217ms to run NodePressure ...
	I0929 13:14:05.737660  837560 start.go:241] waiting for startup goroutines ...
	I0929 13:14:05.737667  837560 start.go:246] waiting for cluster config update ...
	I0929 13:14:05.737679  837560 start.go:255] writing updated cluster config ...
	I0929 13:14:05.738043  837560 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:05.743175  837560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:05.747563  837560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:07.753718  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
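
The "extra waiting" step above polls each kube-system pod until it reports Ready or disappears. A hedged client-go sketch of that check (package, function name, intervals, and error handling are illustrative assumptions, not minikube's pod_ready.go):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReadyOrGone returns once the named pod has the Ready condition set to
// True, or once the pod no longer exists ("Ready or be gone" in the log above).
func WaitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone: treat as done
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
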
	I0929 13:14:06.865904  839515 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:14:06.869732  839515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:14:06.869776  839515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:14:06.869789  839515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:14:06.869797  839515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:14:06.869820  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:14:06.869914  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:14:06.870040  839515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:14:06.870152  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:14:06.881041  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:06.910664  839515 start.go:296] duration metric: took 169.149248ms for postStartSetup
	I0929 13:14:06.910763  839515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:14:06.910806  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.930467  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.026128  839515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:14:07.031766  839515 fix.go:56] duration metric: took 4.990890676s for fixHost
	I0929 13:14:07.031793  839515 start.go:83] releasing machines lock for "default-k8s-diff-port-504443", held for 4.990942592s
	I0929 13:14:07.031878  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:07.050982  839515 ssh_runner.go:195] Run: cat /version.json
	I0929 13:14:07.051039  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.051090  839515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:14:07.051158  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.072609  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.072906  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.245633  839515 ssh_runner.go:195] Run: systemctl --version
	I0929 13:14:07.251713  839515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:14:07.405376  839515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:14:07.412347  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.424730  839515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:14:07.424820  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.436822  839515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:14:07.436852  839515 start.go:495] detecting cgroup driver to use...
	I0929 13:14:07.436922  839515 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:14:07.437079  839515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:14:07.451837  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:14:07.466730  839515 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:14:07.466785  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:14:07.482295  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:14:07.497182  839515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:14:07.573510  839515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:14:07.647720  839515 docker.go:234] disabling docker service ...
	I0929 13:14:07.647793  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:14:07.663956  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:14:07.678340  839515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:14:07.749850  839515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:14:07.833138  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:14:07.847332  839515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:14:07.869460  839515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:14:07.869534  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.882223  839515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:14:07.882304  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.895125  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.908850  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.925290  839515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:14:07.942174  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.956313  839515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.970510  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.984185  839515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:14:07.995199  839515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:14:08.006273  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.079146  839515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:14:08.201036  839515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:14:08.201135  839515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:14:08.205983  839515 start.go:563] Will wait 60s for crictl version
	I0929 13:14:08.206058  839515 ssh_runner.go:195] Run: which crictl
	I0929 13:14:08.210186  839515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:14:08.251430  839515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:14:08.251529  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.296851  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.339448  839515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
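
The two "Will wait 60s" steps above (for the /var/run/crio/crio.sock socket and for crictl to answer) follow a simple poll-until-present pattern. A minimal Go sketch of the socket wait, assuming the path and 60s budget shown in the log (not the actual start.go code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats a path repeatedly until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
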
	I0929 13:14:08.341414  839515 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-504443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:14:08.362344  839515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:14:08.367546  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.381721  839515 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:14:08.381862  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:08.381951  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.433062  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.433096  839515 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:14:08.433161  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.473938  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.473972  839515 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:14:08.473983  839515 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0929 13:14:08.474084  839515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-504443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
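
The kubelet unit rendered above is parameterized by the node's Kubernetes version, hostname, and IP. As a rough illustration of that substitution (hypothetical template and struct names, not minikube's bootstrapper templates), the ExecStart line could be generated like this:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the kubelet flags.
// The field names are made up for illustration.
type nodeParams struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const kubeletExecStart = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.Hostname}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletExecStart))
	// Render with the values seen in the generated unit above.
	_ = tmpl.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.34.0",
		Hostname:          "default-k8s-diff-port-504443",
		NodeIP:            "192.168.76.2",
	})
}
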
	I0929 13:14:08.474149  839515 ssh_runner.go:195] Run: crio config
	I0929 13:14:08.535858  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:08.535928  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:08.535954  839515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:14:08.535987  839515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504443 NodeName:default-k8s-diff-port-504443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:14:08.536149  839515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504443"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:14:08.536221  839515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:14:08.549875  839515 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:14:08.549968  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:14:08.562591  839515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 13:14:08.588448  839515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:14:08.613818  839515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 13:14:08.637842  839515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:14:08.642571  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.658613  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.742685  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:08.769381  839515 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443 for IP: 192.168.76.2
	I0929 13:14:08.769408  839515 certs.go:194] generating shared ca certs ...
	I0929 13:14:08.769432  839515 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:08.769610  839515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:14:08.769690  839515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:14:08.769707  839515 certs.go:256] generating profile certs ...
	I0929 13:14:08.769830  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.key
	I0929 13:14:08.769913  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key.3fc9c8d4
	I0929 13:14:08.769963  839515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key
	I0929 13:14:08.770120  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:14:08.770170  839515 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:14:08.770186  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:14:08.770222  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:14:08.770264  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:14:08.770297  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:14:08.770375  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:08.771164  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:14:08.810187  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:14:08.852550  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:14:08.909671  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:14:08.944558  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:14:08.979658  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:14:09.015199  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:14:09.050930  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:14:09.086524  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:14:09.119207  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:14:09.151483  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:14:09.186734  839515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:14:09.211662  839515 ssh_runner.go:195] Run: openssl version
	I0929 13:14:09.219872  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:14:09.232974  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237506  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237581  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.247699  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:14:09.262697  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:14:09.277818  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283413  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283551  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.293753  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:14:09.307826  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:14:09.322785  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328680  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328758  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.337578  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:14:09.349565  839515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:14:09.355212  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:14:09.365031  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:14:09.376499  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:14:09.386571  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:14:09.396193  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:14:09.405722  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
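The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A minimal Go sketch of an equivalent check is shown below; the file path is hypothetical and this is not minikube's own implementation.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; the log checks files under /var/lib/minikube/certs/.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: fail if the
        // certificate expires within the next 24 hours.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least 24h")
    }
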
	I0929 13:14:09.416490  839515 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:09.416619  839515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:14:09.416692  839515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:14:09.480165  839515 cri.go:89] found id: ""
	I0929 13:14:09.480329  839515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:14:09.502356  839515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:14:09.502385  839515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:14:09.502465  839515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:14:09.516584  839515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:14:09.517974  839515 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-504443" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.518950  839515 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-564029/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-504443" cluster setting kubeconfig missing "default-k8s-diff-port-504443" context setting]
	I0929 13:14:09.520381  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.523350  839515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:14:09.540146  839515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:14:09.540271  839515 kubeadm.go:593] duration metric: took 37.87462ms to restartPrimaryControlPlane
	I0929 13:14:09.540292  839515 kubeadm.go:394] duration metric: took 123.821391ms to StartCluster
	I0929 13:14:09.540318  839515 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.540461  839515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.543243  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.543701  839515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:14:09.543964  839515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:14:09.544056  839515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544105  839515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544134  839515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504443"
	I0929 13:14:09.544215  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:09.544297  839515 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544313  839515 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544323  839515 addons.go:247] addon dashboard should already be in state true
	I0929 13:14:09.544356  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544499  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.544580  839515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544601  839515 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544610  839515 addons.go:247] addon metrics-server should already be in state true
	I0929 13:14:09.544638  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544779  839515 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544826  839515 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:14:09.544867  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544923  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545131  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545706  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.546905  839515 out.go:179] * Verifying Kubernetes components...
	I0929 13:14:09.548849  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:09.588222  839515 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.588254  839515 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:14:09.588394  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.589235  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.591356  839515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:14:09.592899  839515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.592920  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:14:09.592997  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.599097  839515 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:14:09.603537  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:09.603567  839515 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:09.603641  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.623364  839515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:14:09.625378  839515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:14:09.626964  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:09.626991  839515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:09.627087  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.646947  839515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.647072  839515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:09.647170  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.657171  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.660429  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.682698  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.694425  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.758623  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:09.782535  839515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:09.796122  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.824319  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:09.824349  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:09.831248  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:09.831269  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:09.857539  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.865401  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:09.865601  839515 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:09.868433  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:09.868454  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:09.911818  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.911849  839515 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:09.919662  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:09.919693  839515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:09.945916  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.956819  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:09.956847  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:09.983049  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:09.983088  839515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:10.008150  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:10.008187  839515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:10.035225  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:10.035255  839515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:10.063000  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:10.063033  839515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:10.088151  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:10.088182  839515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:10.111599  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:12.055468  839515 node_ready.go:49] node "default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:12.055507  839515 node_ready.go:38] duration metric: took 2.272916493s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:12.055524  839515 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:12.055588  839515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:12.693113  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.896952632s)
	I0929 13:14:12.693205  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.835545565s)
	I0929 13:14:12.693264  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.747320981s)
	I0929 13:14:12.693289  839515 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-504443"
	I0929 13:14:12.693401  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.581752595s)
	I0929 13:14:12.693437  839515 api_server.go:72] duration metric: took 3.149694543s to wait for apiserver process to appear ...
	I0929 13:14:12.693448  839515 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:12.693465  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:12.695374  839515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-504443 addons enable metrics-server
	
	I0929 13:14:12.698283  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:12.698311  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:12.701668  839515 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0929 13:14:09.762777  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:12.254708  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:12.703272  839515 addons.go:514] duration metric: took 3.159290714s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:13.194062  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.199962  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.200005  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:13.693647  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.699173  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.699207  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:14.193661  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:14.198386  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:14:14.199540  839515 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:14.199566  839515 api_server.go:131] duration metric: took 1.506111317s to wait for apiserver health ...
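The healthz wait above simply retries until /healthz returns 200; the earlier 500 responses only mean post-start hooks such as rbac/bootstrap-roles had not finished yet. A minimal polling sketch under assumed parameters (the retry interval and the relaxed TLS verification are assumptions; the endpoint is taken from the log) could look like this:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver uses a cluster-internal CA, so this sketch skips
        // certificate verification rather than loading the CA bundle.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        url := "https://192.168.76.2:8444/healthz" // endpoint from the log above
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Println("healthz request failed, retrying:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
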
	I0929 13:14:14.199576  839515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:14.203404  839515 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:14.203444  839515 system_pods.go:61] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.203452  839515 system_pods.go:61] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.203458  839515 system_pods.go:61] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.203465  839515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.203471  839515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.203482  839515 system_pods.go:61] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.203495  839515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.203503  839515 system_pods.go:61] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.203512  839515 system_pods.go:61] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.203520  839515 system_pods.go:74] duration metric: took 3.93835ms to wait for pod list to return data ...
	I0929 13:14:14.203531  839515 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:14.206279  839515 default_sa.go:45] found service account: "default"
	I0929 13:14:14.206304  839515 default_sa.go:55] duration metric: took 2.763244ms for default service account to be created ...
	I0929 13:14:14.206315  839515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:14.209977  839515 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:14.210027  839515 system_pods.go:89] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.210040  839515 system_pods.go:89] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.210048  839515 system_pods.go:89] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.210057  839515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.210066  839515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.210073  839515 system_pods.go:89] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.210082  839515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.210089  839515 system_pods.go:89] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.210121  839515 system_pods.go:89] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.210130  839515 system_pods.go:126] duration metric: took 3.808134ms to wait for k8s-apps to be running ...
	I0929 13:14:14.210140  839515 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:14.210201  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:14.225164  839515 system_svc.go:56] duration metric: took 15.009784ms WaitForService to wait for kubelet
	I0929 13:14:14.225205  839515 kubeadm.go:578] duration metric: took 4.681459973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:14.225249  839515 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:14.228249  839515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:14.228290  839515 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:14.228307  839515 node_conditions.go:105] duration metric: took 3.048343ms to run NodePressure ...
	I0929 13:14:14.228326  839515 start.go:241] waiting for startup goroutines ...
	I0929 13:14:14.228336  839515 start.go:246] waiting for cluster config update ...
	I0929 13:14:14.228350  839515 start.go:255] writing updated cluster config ...
	I0929 13:14:14.228612  839515 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:14.233754  839515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:14.238169  839515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:16.244346  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:14.257696  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:16.754720  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:18.244963  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:20.245434  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:19.254143  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:21.754181  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:22.245771  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:24.743982  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:26.745001  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:23.755533  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:26.254152  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:29.244352  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:31.244535  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:28.753653  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:30.754009  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:33.744429  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:35.745000  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:33.254079  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:35.753251  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:37.754125  837560 pod_ready.go:94] pod "coredns-66bc5c9577-vrkvb" is "Ready"
	I0929 13:14:37.754153  837560 pod_ready.go:86] duration metric: took 32.006559006s for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.757295  837560 pod_ready.go:83] waiting for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.762511  837560 pod_ready.go:94] pod "etcd-embed-certs-144376" is "Ready"
	I0929 13:14:37.762543  837560 pod_ready.go:86] duration metric: took 5.214008ms for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.765205  837560 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.769732  837560 pod_ready.go:94] pod "kube-apiserver-embed-certs-144376" is "Ready"
	I0929 13:14:37.769763  837560 pod_ready.go:86] duration metric: took 4.5304ms for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.772045  837560 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.952582  837560 pod_ready.go:94] pod "kube-controller-manager-embed-certs-144376" is "Ready"
	I0929 13:14:37.952613  837560 pod_ready.go:86] duration metric: took 180.54484ms for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.152075  837560 pod_ready.go:83] waiting for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.552510  837560 pod_ready.go:94] pod "kube-proxy-bdkrl" is "Ready"
	I0929 13:14:38.552543  837560 pod_ready.go:86] duration metric: took 400.438224ms for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.751930  837560 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152918  837560 pod_ready.go:94] pod "kube-scheduler-embed-certs-144376" is "Ready"
	I0929 13:14:39.152978  837560 pod_ready.go:86] duration metric: took 401.010043ms for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152998  837560 pod_ready.go:40] duration metric: took 33.409779031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:39.200854  837560 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:39.202814  837560 out.go:179] * Done! kubectl is now configured to use "embed-certs-144376" cluster and "default" namespace by default
	W0929 13:14:38.244646  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:40.745094  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:43.243922  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:45.744130  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	I0929 13:14:46.743671  839515 pod_ready.go:94] pod "coredns-66bc5c9577-prpff" is "Ready"
	I0929 13:14:46.743700  839515 pod_ready.go:86] duration metric: took 32.505501945s for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.746421  839515 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.752034  839515 pod_ready.go:94] pod "etcd-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.752061  839515 pod_ready.go:86] duration metric: took 5.610516ms for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.754137  839515 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.758705  839515 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.758739  839515 pod_ready.go:86] duration metric: took 4.576444ms for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.761180  839515 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.941521  839515 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.941552  839515 pod_ready.go:86] duration metric: took 180.339824ms for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.141974  839515 pod_ready.go:83] waiting for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.541782  839515 pod_ready.go:94] pod "kube-proxy-vcsfr" is "Ready"
	I0929 13:14:47.541812  839515 pod_ready.go:86] duration metric: took 399.809326ms for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.742034  839515 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142534  839515 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:48.142565  839515 pod_ready.go:86] duration metric: took 400.492621ms for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142578  839515 pod_ready.go:40] duration metric: took 33.908786928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:48.192681  839515 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:48.194961  839515 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-504443" cluster and "default" namespace by default
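The pod_ready.go entries above poll each kube-system pod until its Ready condition is true or the pod is gone. A condensed client-go sketch of that kind of wait is shown below; the kubeconfig path is hypothetical, the pod name is taken from the log, and this is not the test helper itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the test uses the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name, ns := "coredns-66bc5c9577-prpff", "kube-system" // pod name from the log
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Println("get pod failed, retrying:", err)
            } else {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }
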
	
	
	==> CRI-O <==
	Sep 29 13:18:52 old-k8s-version-223488 crio[563]: time="2025-09-29 13:18:52.141108432Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1f3f8cbc-9dc1-4a52-a800-ed900944ebfe name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:00 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:00.140043524Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5bde4cde-cac2-4798-b1ca-25b3a9542353 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:00 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:00.140311653Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5bde4cde-cac2-4798-b1ca-25b3a9542353 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:03 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:03.139877211Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d019b9f2-aee0-4dd1-8caa-c693b9414ba4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:03 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:03.140177595Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d019b9f2-aee0-4dd1-8caa-c693b9414ba4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:11 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:11.140386721Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e82264d3-ccbb-4dca-bf12-013625ab6577 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:11 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:11.140676945Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e82264d3-ccbb-4dca-bf12-013625ab6577 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:16 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:16.140527373Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9b65ef4e-7547-4101-ac37-4b5fe819b4f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:16 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:16.140934701Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9b65ef4e-7547-4101-ac37-4b5fe819b4f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:16 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:16.141471132Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3bd5e97d-57ef-4de3-836b-c02c4c7127dc name=/runtime.v1.ImageService/PullImage
	Sep 29 13:19:16 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:16.152738702Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:19:22 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:22.140168666Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ef09070d-54fb-4adb-a256-ebf74cc56eb1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:22 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:22.140600291Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ef09070d-54fb-4adb-a256-ebf74cc56eb1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:36 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:36.140308635Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1466e35d-d5d3-434c-bd24-a0f845dae351 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:36 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:36.140629745Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1466e35d-d5d3-434c-bd24-a0f845dae351 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:51 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:51.140114414Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1610dac3-1d3c-4188-b931-887c8873b103 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:51 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:51.140380577Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1610dac3-1d3c-4188-b931-887c8873b103 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:58 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:58.140198590Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0d201106-5bc0-42c3-9c49-c17f264cbb6b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:58 old-k8s-version-223488 crio[563]: time="2025-09-29 13:19:58.140585196Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0d201106-5bc0-42c3-9c49-c17f264cbb6b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:03 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:03.140661417Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e2101514-3114-4f89-b4fd-dac52e3e5a84 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:03 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:03.141023677Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e2101514-3114-4f89-b4fd-dac52e3e5a84 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:12 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:12.139731106Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8b378ccc-6d20-4d23-9d7e-686df25d0974 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:12 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:12.140102417Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8b378ccc-6d20-4d23-9d7e-686df25d0974 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:14 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:14.139809855Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0605a423-56a4-484c-9a73-27f44fede6e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:14 old-k8s-version-223488 crio[563]: time="2025-09-29 13:20:14.140134719Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0605a423-56a4-484c-9a73-27f44fede6e4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d83afa17b6651       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   9fc695ba94981       dashboard-metrics-scraper-5f989dc9cf-sm4lt
	f1080a53e734e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   2c08df618ae22       storage-provisioner
	f7374d71ac076       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                     1                   523b630c4c13e       coredns-5dd5756b68-w7p64
	48a60cedea0d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   ac9a2dac72f9b       kindnet-gkh8l
	f6464328e5ed7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   b139267d22cdd       busybox
	1980694c9b731       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   9 minutes ago       Running             kube-proxy                  1                   3d3e7a8c7ffaa       kube-proxy-fmnl8
	6350254ce867f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   2c08df618ae22       storage-provisioner
	b0fcfda364a2d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   9 minutes ago       Running             kube-scheduler              1                   d9471f2448ce1       kube-scheduler-old-k8s-version-223488
	d2acbb48a2ad1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   9 minutes ago       Running             kube-apiserver              1                   46df369160c5a       kube-apiserver-old-k8s-version-223488
	b89ec95aa6412       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   9 minutes ago       Running             kube-controller-manager     1                   ad68a2a621148       kube-controller-manager-old-k8s-version-223488
	e1bbb3fe053d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                        1                   a2c80dc375458       etcd-old-k8s-version-223488
	
	
	==> coredns [f7374d71ac076a422f15d1fc4ac423e11d8d7d2f4314badc06d726747cad9a7f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39068 - 31847 "HINFO IN 3740510856147808050.6485710210283806308. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119457697s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-223488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-223488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-223488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_09_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:09:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-223488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:16:23 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:16:23 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:16:23 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:16:23 +0000   Mon, 29 Sep 2025 13:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-223488
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d95519c32a0a4fb19ce38cab34beaac2
	  System UUID:                41eac839-6b1b-4b6d-a6a7-9ab802ae2f2e
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-w7p64                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-223488                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-gkh8l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-223488             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-223488    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fmnl8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-223488             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-cmxv5                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-sm4lt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gg4cr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-223488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-223488 event: Registered Node old-k8s-version-223488 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-223488 status is now: NodeReady
	  Normal  Starting                 9m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m45s (x8 over 9m45s)  kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m45s (x8 over 9m45s)  kubelet          Node old-k8s-version-223488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m45s (x8 over 9m45s)  kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m29s                  node-controller  Node old-k8s-version-223488 event: Registered Node old-k8s-version-223488 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [e1bbb3fe053d4f6b4672b4f29700db930fe370ee31d7bbd99763468fba15c2de] <==
	{"level":"info","ts":"2025-09-29T13:10:43.034697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-09-29T13:10:43.034853Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-09-29T13:10:43.036244Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:10:43.036996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:10:43.038281Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T13:10:43.038556Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T13:10:43.038599Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T13:10:43.03866Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T13:10:43.038671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T13:10:44.317029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.317112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.317142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.31715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.318827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-223488 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:10:44.318827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:10:44.318854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:10:44.319143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:10:44.319208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:10:44.320134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:10:44.320185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-09-29T13:12:17.248134Z","caller":"traceutil/trace.go:171","msg":"trace[739949258] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"119.124427ms","start":"2025-09-29T13:12:17.128988Z","end":"2025-09-29T13:12:17.248113Z","steps":["trace[739949258] 'process raft request'  (duration: 118.985322ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:57.297407Z","caller":"traceutil/trace.go:171","msg":"trace[513510388] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"133.00081ms","start":"2025-09-29T13:12:57.164383Z","end":"2025-09-29T13:12:57.297384Z","steps":["trace[513510388] 'process raft request'  (duration: 132.872634ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:20:27 up  3:02,  0 users,  load average: 0.36, 1.13, 1.68
	Linux old-k8s-version-223488 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [48a60cedea0d6dfd8c26c7fd40cd1a47fd53d4c52182ef59bc3979173acb1ce5] <==
	I0929 13:18:27.119181       1 main.go:301] handling current node
	I0929 13:18:37.126013       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:37.126053       1 main.go:301] handling current node
	I0929 13:18:47.126189       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:47.126232       1 main.go:301] handling current node
	I0929 13:18:57.119750       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:57.119798       1 main.go:301] handling current node
	I0929 13:19:07.122256       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:07.122315       1 main.go:301] handling current node
	I0929 13:19:17.123999       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:17.124035       1 main.go:301] handling current node
	I0929 13:19:27.119509       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:27.119544       1 main.go:301] handling current node
	I0929 13:19:37.126033       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:37.126068       1 main.go:301] handling current node
	I0929 13:19:47.123971       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:47.124038       1 main.go:301] handling current node
	I0929 13:19:57.124389       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:57.124435       1 main.go:301] handling current node
	I0929 13:20:07.124948       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:20:07.124997       1 main.go:301] handling current node
	I0929 13:20:17.121278       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:20:17.121315       1 main.go:301] handling current node
	I0929 13:20:27.119020       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:20:27.119066       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2acbb48a2ad1b4f139989bbd165ed93cf360d3f6a8d47fbf90f6b4a2c7fbd8b] <==
	E0929 13:18:15.377455       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:18:25.377756       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:18:35.378624       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I0929 13:18:45.241700       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.162.8:443: connect: connection refused
	I0929 13:18:45.241729       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 13:18:45.378993       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	W0929 13:18:46.307252       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:18:46.307361       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:18:46.307373       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:18:46.307459       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:18:46.307497       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:18:46.309383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 13:18:55.379332       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:19:05.380294       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:19:15.381432       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:19:25.381849       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:19:35.382301       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0929 13:19:45.242483       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.162.8:443: connect: connection refused
	I0929 13:19:45.242508       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 13:19:45.382962       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:19:55.384036       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:20:05.384815       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:20:15.385307       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:20:25.385690       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [b89ec95aa641221a0461d0e0054bb6c82a40de4a33a7c7065c53c2891f6e4f18] <==
	I0929 13:15:58.924545       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:16:27.953021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="145.38µs"
	E0929 13:16:28.482542       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:16:28.932261       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:16:28.951482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.025µs"
	I0929 13:16:48.151289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.671µs"
	E0929 13:16:58.487821       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:16:58.940071       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:17:03.150839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="117.863µs"
	E0929 13:17:28.492803       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:17:28.948481       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:17:45.151338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="131.789µs"
	E0929 13:17:58.498023       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:17:58.955760       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:17:59.151538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="124.02µs"
	E0929 13:18:28.502440       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:18:28.963441       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:18:58.507692       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:18:58.971926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:19:28.512666       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:19:28.979396       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:19:58.150991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.992µs"
	E0929 13:19:58.517115       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:19:58.986795       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:20:12.150635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="109.034µs"
	
	
	==> kube-proxy [1980694c9b7313a14cd5c4651f5cb23afa10cecec355a61371114306fbc630ef] <==
	I0929 13:10:46.717842       1 server_others.go:69] "Using iptables proxy"
	I0929 13:10:46.727581       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0929 13:10:46.748040       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:10:46.750659       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:10:46.750695       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:10:46.750701       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:10:46.750733       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:10:46.751014       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:10:46.751034       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:46.751704       1 config.go:188] "Starting service config controller"
	I0929 13:10:46.751734       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:10:46.751734       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:10:46.751752       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:10:46.751803       1 config.go:315] "Starting node config controller"
	I0929 13:10:46.751815       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:10:46.852269       1 shared_informer.go:318] Caches are synced for service config
	I0929 13:10:46.852404       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:10:46.852413       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b0fcfda364a2df1cfab036555acee98a844fcc156eaa9ff263e3f93d0ed32525] <==
	I0929 13:10:43.364709       1 serving.go:348] Generated self-signed cert in-memory
	W0929 13:10:45.284550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:10:45.284588       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:10:45.284605       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:10:45.284617       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:10:45.307155       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 13:10:45.307258       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:45.309928       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:45.309976       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 13:10:45.310654       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 13:10:45.310683       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 13:10:45.411131       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 13:19:11 old-k8s-version-223488 kubelet[712]: E0929 13:19:11.141096     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:19:18 old-k8s-version-223488 kubelet[712]: I0929 13:19:18.139407     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:19:18 old-k8s-version-223488 kubelet[712]: E0929 13:19:18.139743     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:19:22 old-k8s-version-223488 kubelet[712]: E0929 13:19:22.140915     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:19:32 old-k8s-version-223488 kubelet[712]: I0929 13:19:32.140401     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:19:32 old-k8s-version-223488 kubelet[712]: E0929 13:19:32.140804     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:19:36 old-k8s-version-223488 kubelet[712]: E0929 13:19:36.140994     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:19:44 old-k8s-version-223488 kubelet[712]: I0929 13:19:44.140277     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:19:44 old-k8s-version-223488 kubelet[712]: E0929 13:19:44.140636     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:19:47 old-k8s-version-223488 kubelet[712]: E0929 13:19:47.489086     712 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:19:47 old-k8s-version-223488 kubelet[712]: E0929 13:19:47.489153     712 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:19:47 old-k8s-version-223488 kubelet[712]: E0929 13:19:47.489318     712 kuberuntime_manager.go:1209] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-79dc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-8694d4445c-gg4cr_kubernetes-dashboard(2a3f7370-a761-486c-993f-c0a0cc93ce6b): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Sep 29 13:19:47 old-k8s-version-223488 kubelet[712]: E0929 13:19:47.489382     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:19:51 old-k8s-version-223488 kubelet[712]: E0929 13:19:51.140695     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:19:57 old-k8s-version-223488 kubelet[712]: I0929 13:19:57.139285     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:19:57 old-k8s-version-223488 kubelet[712]: E0929 13:19:57.139565     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:19:58 old-k8s-version-223488 kubelet[712]: E0929 13:19:58.140824     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:20:03 old-k8s-version-223488 kubelet[712]: E0929 13:20:03.141431     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:20:09 old-k8s-version-223488 kubelet[712]: I0929 13:20:09.139399     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:20:09 old-k8s-version-223488 kubelet[712]: E0929 13:20:09.139668     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:20:12 old-k8s-version-223488 kubelet[712]: E0929 13:20:12.140378     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:20:14 old-k8s-version-223488 kubelet[712]: E0929 13:20:14.140390     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:20:24 old-k8s-version-223488 kubelet[712]: I0929 13:20:24.140170     712 scope.go:117] "RemoveContainer" containerID="d83afa17b6651ed642a8fad6a23ceed8e63a38640886d1e28f9baf45318c6a7c"
	Sep 29 13:20:24 old-k8s-version-223488 kubelet[712]: E0929 13:20:24.140601     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:20:27 old-k8s-version-223488 kubelet[712]: E0929 13:20:27.140495     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	
	
	==> storage-provisioner [6350254ce867f1801e14d2a1ff83cd80c271543e49f2885304e1f0d47425adda] <==
	I0929 13:10:46.637609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:16.641483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f1080a53e734ed2fc814679a4192cbd38ed15d4cab74d67f852ef3d4759cc815] <==
	I0929 13:11:17.350350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 13:11:17.359169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 13:11:17.359220       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 13:11:34.757171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 13:11:34.757340       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab!
	I0929 13:11:34.757322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be1543b8-78ff-45f5-b24f-0db84f9fdd32", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab became leader
	I0929 13:11:34.857600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-223488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr: exit status 1 (60.457516ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cmxv5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-gg4cr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dls8" [aae6c127-73bd-4658-8206-ab662eaea2b1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:20:43.08016773 +0000 UTC m=+3313.380204199
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-929827 describe po kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-929827 describe po kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-9dls8
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-929827/192.168.103.2
Start Time:       Mon, 29 Sep 2025 13:11:11 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8mhlf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-8mhlf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m31s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8 to no-preload-929827
  Normal   Pulling    4m25s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m54s (x5 over 8m58s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m54s (x5 over 8m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     2m45s (x16 over 8m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    100s (x21 over 8m57s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard: exit status 1 (77.065943ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-9dls8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
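The events above pin the failure on Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than on the dashboard addon itself: the kubelet retried the docker.io pull for roughly nine minutes and never received the image, so the pod stayed in ImagePullBackOff until the 9m0s wait expired. As an illustrative sketch only (not something this test run executed), one workaround on a rate-limited host is to pull the image with authenticated credentials and side-load it into the affected profile so the node no longer pulls anonymously; whether the side-loaded image satisfies the digest-pinned reference in the pod spec depends on how the runtime resolves it:

	# sketch: pre-load the dashboard image into the no-preload-929827 profile
	docker login                                                  # authenticate the host pull first
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p no-preload-929827 image load docker.io/kubernetesui/dashboard:v2.7.0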
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-929827
helpers_test.go:243: (dbg) docker inspect no-preload-929827:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac",
	        "Created": "2025-09-29T13:09:36.134872723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 817261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:58.068596599Z",
	            "FinishedAt": "2025-09-29T13:10:57.197117344Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/hostname",
	        "HostsPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/hosts",
	        "LogPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac-json.log",
	        "Name": "/no-preload-929827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-929827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-929827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac",
	                "LowerDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-929827",
	                "Source": "/var/lib/docker/volumes/no-preload-929827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-929827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-929827",
	                "name.minikube.sigs.k8s.io": "no-preload-929827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5e0205b877974f862bf692adf980537493b00dd53d07253c81b9026c2e99739",
	            "SandboxKey": "/var/run/docker/netns/d5e0205b8779",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-929827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:c4:46:31:dc:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "df408269424551a4f38c50a43890d5ab69bd7640c4c8f425e46136888332a1e7",
	                    "EndpointID": "fefb57f53176d4c31f4392a8dcd3b010959999cdbd71ae0500a3e93debb86f54",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-929827",
	                        "143d78ecaef5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
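The inspect dump above is captured verbatim by the post-mortem helper. When only a single field is needed during triage, the same docker CLI accepts a Go-template filter; a minimal sketch against the container shown above (the field paths match the JSON: State.Status is "running" and the profile network carries IPAddress 192.168.103.2):

	docker inspect -f '{{.State.Status}}' no-preload-929827
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-929827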
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-929827 -n no-preload-929827
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-929827 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-929827 logs -n 25: (1.315188376s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-223488 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:14:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:14:01.801416  839515 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:14:01.801548  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801557  839515 out.go:374] Setting ErrFile to fd 2...
	I0929 13:14:01.801561  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801790  839515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:14:01.802369  839515 out.go:368] Setting JSON to false
	I0929 13:14:01.803835  839515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10587,"bootTime":1759141055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:14:01.803980  839515 start.go:140] virtualization: kvm guest
	I0929 13:14:01.806446  839515 out.go:179] * [default-k8s-diff-port-504443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:14:01.808471  839515 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:14:01.808488  839515 notify.go:220] Checking for updates...
	I0929 13:14:01.811422  839515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:14:01.813137  839515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:01.815358  839515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:14:01.817089  839515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:14:01.818747  839515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:14:01.820859  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:01.821367  839515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:14:01.850294  839515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:14:01.850496  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:01.920086  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.906779425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:01.920249  839515 docker.go:318] overlay module found
	I0929 13:14:01.923199  839515 out.go:179] * Using the docker driver based on existing profile
	I0929 13:14:01.924580  839515 start.go:304] selected driver: docker
	I0929 13:14:01.924604  839515 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:01.924742  839515 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:14:01.925594  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:02.004135  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.989084501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:02.004575  839515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:02.004635  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:02.004699  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:02.004749  839515 start.go:348] cluster config:
	{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:02.006556  839515 out.go:179] * Starting "default-k8s-diff-port-504443" primary control-plane node in "default-k8s-diff-port-504443" cluster
	I0929 13:14:02.007837  839515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:14:02.009404  839515 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:14:02.011260  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:02.011353  839515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:14:02.011371  839515 cache.go:58] Caching tarball of preloaded images
	I0929 13:14:02.011418  839515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:14:02.011589  839515 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:14:02.011606  839515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:14:02.011761  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.040696  839515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:14:02.040723  839515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:14:02.040747  839515 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:14:02.040778  839515 start.go:360] acquireMachinesLock for default-k8s-diff-port-504443: {Name:mkd1504d0afcb57e7e3a7d375c0d3d045f6ff0f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:14:02.040840  839515 start.go:364] duration metric: took 41.435µs to acquireMachinesLock for "default-k8s-diff-port-504443"
	I0929 13:14:02.040859  839515 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:14:02.040866  839515 fix.go:54] fixHost starting: 
	I0929 13:14:02.041151  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.065452  839515 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504443: state=Stopped err=<nil>
	W0929 13:14:02.065493  839515 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:14:00.890602  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:00.890614  837560 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:00.890670  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.892229  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:00.892253  837560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:00.892339  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.932762  837560 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:00.932828  837560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:00.932989  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.934137  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.945316  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.948654  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.961271  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:01.034193  837560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:01.056199  837560 node_ready.go:35] waiting up to 6m0s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:01.062352  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:01.074784  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:01.074816  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:01.080006  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:01.080035  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:01.096572  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:01.107273  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:01.107304  837560 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:01.123628  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:01.123736  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:01.159235  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.159267  837560 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:01.162841  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:01.163496  837560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:01.197386  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.198337  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:01.198359  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:01.226863  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:01.226900  837560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:01.252970  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:01.252998  837560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:01.278501  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:01.278527  837560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:01.303325  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:01.303366  837560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:01.329503  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:01.329532  837560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:01.353791  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:03.007947  837560 node_ready.go:49] node "embed-certs-144376" is "Ready"
	I0929 13:14:03.007988  837560 node_ready.go:38] duration metric: took 1.951746003s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:03.008006  837560 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:03.008068  837560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:03.686627  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.624233175s)
	I0929 13:14:03.686706  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.590098715s)
	I0929 13:14:03.686993  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489568477s)
	I0929 13:14:03.687027  837560 addons.go:479] Verifying addon metrics-server=true in "embed-certs-144376"
	I0929 13:14:03.687147  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.333304219s)
	I0929 13:14:03.687396  837560 api_server.go:72] duration metric: took 2.840723243s to wait for apiserver process to appear ...
	I0929 13:14:03.687413  837560 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:03.687434  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:03.689946  837560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-144376 addons enable metrics-server
	
	I0929 13:14:03.693918  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:03.693955  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:03.703949  837560 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0929 13:14:02.067503  839515 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-504443" ...
	I0929 13:14:02.067595  839515 cli_runner.go:164] Run: docker start default-k8s-diff-port-504443
	I0929 13:14:02.400205  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.426021  839515 kic.go:430] container "default-k8s-diff-port-504443" state is running.
	I0929 13:14:02.426697  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:02.452245  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.452576  839515 machine.go:93] provisionDockerMachine start ...
	I0929 13:14:02.452686  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:02.476313  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:02.476569  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:02.476592  839515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:14:02.477420  839515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45360->127.0.0.1:33463: read: connection reset by peer
	I0929 13:14:05.620847  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.620906  839515 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-504443"
	I0929 13:14:05.621012  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.641909  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.642258  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.642275  839515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504443 && echo "default-k8s-diff-port-504443" | sudo tee /etc/hostname
	I0929 13:14:05.804833  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.804936  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.826632  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.826863  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.826904  839515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504443/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:14:05.968467  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:14:05.968502  839515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:14:05.968535  839515 ubuntu.go:190] setting up certificates
	I0929 13:14:05.968548  839515 provision.go:84] configureAuth start
	I0929 13:14:05.968610  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:05.988690  839515 provision.go:143] copyHostCerts
	I0929 13:14:05.988763  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:14:05.988788  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:14:05.988904  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:14:05.989039  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:14:05.989049  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:14:05.989082  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:14:05.989162  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:14:05.989170  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:14:05.989196  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:14:05.989339  839515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504443 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-504443 localhost minikube]
	I0929 13:14:06.185911  839515 provision.go:177] copyRemoteCerts
	I0929 13:14:06.185989  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:14:06.186098  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.205790  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:06.309505  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:14:06.340444  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:14:06.372277  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:14:06.402506  839515 provision.go:87] duration metric: took 433.943194ms to configureAuth
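configureAuth regenerates a server certificate signed by the profile CA, carrying the SANs listed above (127.0.0.1, 192.168.76.2, the node name, localhost, minikube), and copies it to the machine. As a rough illustration only, not the libmachine implementation, the sketch below issues such a SAN-bearing certificate with crypto/x509; the PEM paths are placeholders and the CA key is assumed to be an unencrypted PKCS#1 RSA key.

    // signcert.go - illustrative sketch of issuing a SAN server cert from an existing CA.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustPEM(path string) *pem.Block {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	blk, _ := pem.Decode(b)
    	if blk == nil {
    		log.Fatalf("no PEM data in %s", path)
    	}
    	return blk
    }

    func main() {
    	// Placeholder paths; the real run uses ca.pem / ca-key.pem from .minikube/certs.
    	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes RSA PKCS#1
    	if err != nil {
    		log.Fatal(err)
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-504443"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the ones logged above.
    		DNSNames:    []string{"default-k8s-diff-port-504443", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	f, err := os.Create("server.pem") // the private key would be written out the same way
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	if err := pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }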
	I0929 13:14:06.402539  839515 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:14:06.402765  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:06.402931  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.424941  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:06.425216  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:06.425243  839515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:14:06.741449  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:14:06.741480  839515 machine.go:96] duration metric: took 4.288878167s to provisionDockerMachine
	I0929 13:14:06.741495  839515 start.go:293] postStartSetup for "default-k8s-diff-port-504443" (driver="docker")
	I0929 13:14:06.741509  839515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:14:06.741575  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:14:06.741626  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.764273  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:03.706436  837560 addons.go:514] duration metric: took 2.859616556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:04.188145  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.194079  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.194114  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:04.687754  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.692514  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.692547  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.188198  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.193003  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:05.193033  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.687682  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.692821  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:14:05.694070  837560 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:05.694103  837560 api_server.go:131] duration metric: took 2.006683698s to wait for apiserver health ...
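The 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, apiservice-discovery-controller) finish; the checker simply re-polls /healthz until it sees a 200. A generic polling sketch, not the api_server.go code, with TLS verification skipped because the apiserver certificate is not in the host trust store:

    // healthzpoll.go - illustrative retry loop against an HTTPS /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 3 * time.Second,
    		// The apiserver serves a self-signed cert, so verification is skipped in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log above
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }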
	I0929 13:14:05.694113  837560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:05.699584  837560 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:05.699638  837560 system_pods.go:61] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.699655  837560 system_pods.go:61] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.699667  837560 system_pods.go:61] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.699676  837560 system_pods.go:61] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.699687  837560 system_pods.go:61] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.699697  837560 system_pods.go:61] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.699711  837560 system_pods.go:61] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.699721  837560 system_pods.go:61] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.699734  837560 system_pods.go:61] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.699743  837560 system_pods.go:74] duration metric: took 5.622791ms to wait for pod list to return data ...
	I0929 13:14:05.699757  837560 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:05.703100  837560 default_sa.go:45] found service account: "default"
	I0929 13:14:05.703127  837560 default_sa.go:55] duration metric: took 3.363521ms for default service account to be created ...
	I0929 13:14:05.703137  837560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:05.712514  837560 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:05.712559  837560 system_pods.go:89] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.712571  837560 system_pods.go:89] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.712579  837560 system_pods.go:89] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.712592  837560 system_pods.go:89] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.712601  837560 system_pods.go:89] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.712614  837560 system_pods.go:89] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.712629  837560 system_pods.go:89] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.712643  837560 system_pods.go:89] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.712648  837560 system_pods.go:89] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.712659  837560 system_pods.go:126] duration metric: took 9.514361ms to wait for k8s-apps to be running ...
	I0929 13:14:05.712669  837560 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:05.712730  837560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:05.733971  837560 system_svc.go:56] duration metric: took 21.287495ms WaitForService to wait for kubelet
	I0929 13:14:05.734004  837560 kubeadm.go:578] duration metric: took 4.887332987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:05.734047  837560 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:05.737599  837560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:05.737632  837560 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:05.737645  837560 node_conditions.go:105] duration metric: took 3.59217ms to run NodePressure ...
	I0929 13:14:05.737660  837560 start.go:241] waiting for startup goroutines ...
	I0929 13:14:05.737667  837560 start.go:246] waiting for cluster config update ...
	I0929 13:14:05.737679  837560 start.go:255] writing updated cluster config ...
	I0929 13:14:05.738043  837560 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:05.743175  837560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:05.747563  837560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:07.753718  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
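The extra wait above keys off the standard PodReady condition on each control-plane pod. For reference, a hedged client-go sketch of that check; the kubeconfig path, namespace, and pod name are placeholders, and the k8s.io/client-go module is assumed.

    // podready.go - illustrative check of the PodReady condition via client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig/namespace/name; the test run uses its own profile kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-vrkvb", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }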
	I0929 13:14:06.865904  839515 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:14:06.869732  839515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:14:06.869776  839515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:14:06.869789  839515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:14:06.869797  839515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:14:06.869820  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:14:06.869914  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:14:06.870040  839515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:14:06.870152  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:14:06.881041  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:06.910664  839515 start.go:296] duration metric: took 169.149248ms for postStartSetup
	I0929 13:14:06.910763  839515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:14:06.910806  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.930467  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.026128  839515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:14:07.031766  839515 fix.go:56] duration metric: took 4.990890676s for fixHost
	I0929 13:14:07.031793  839515 start.go:83] releasing machines lock for "default-k8s-diff-port-504443", held for 4.990942592s
	I0929 13:14:07.031878  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:07.050982  839515 ssh_runner.go:195] Run: cat /version.json
	I0929 13:14:07.051039  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.051090  839515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:14:07.051158  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.072609  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.072906  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
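The `curl -sS -m 2 https://registry.k8s.io/` run above is a simple outbound-connectivity probe with a two-second budget. An equivalent sketch in Go:

    // reachability.go - illustrative equivalent of the curl probe with a 2s timeout.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second} // same 2s budget as the curl call
    	resp, err := client.Get("https://registry.k8s.io/")
    	if err != nil {
    		fmt.Println("registry not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("registry reachable, status:", resp.Status)
    }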
	I0929 13:14:07.245633  839515 ssh_runner.go:195] Run: systemctl --version
	I0929 13:14:07.251713  839515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:14:07.405376  839515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:14:07.412347  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.424730  839515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:14:07.424820  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.436822  839515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:14:07.436852  839515 start.go:495] detecting cgroup driver to use...
	I0929 13:14:07.436922  839515 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:14:07.437079  839515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:14:07.451837  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:14:07.466730  839515 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:14:07.466785  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:14:07.482295  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:14:07.497182  839515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:14:07.573510  839515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:14:07.647720  839515 docker.go:234] disabling docker service ...
	I0929 13:14:07.647793  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:14:07.663956  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:14:07.678340  839515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:14:07.749850  839515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:14:07.833138  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:14:07.847332  839515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:14:07.869460  839515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:14:07.869534  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.882223  839515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:14:07.882304  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.895125  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.908850  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.925290  839515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:14:07.942174  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.956313  839515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.970510  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
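The sed runs above rewrite only a handful of keys in the CRI-O drop-in: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A rough Go equivalent of the first two substitutions, operating on a local copy of the file (the path is a placeholder; this is a sketch, not crio.go):

    // crioconf.go - illustrative rewrite of pause_image / cgroup_manager in a CRI-O drop-in.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Placeholder path; on the node the file is /etc/crio/crio.conf.d/02-crio.conf.
    	path := "02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	conf := string(data)
    	// Same substitutions the sed commands above perform.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }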
	I0929 13:14:07.984185  839515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:14:07.995199  839515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:14:08.006273  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.079146  839515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:14:08.201036  839515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:14:08.201135  839515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:14:08.205983  839515 start.go:563] Will wait 60s for crictl version
	I0929 13:14:08.206058  839515 ssh_runner.go:195] Run: which crictl
	I0929 13:14:08.210186  839515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:14:08.251430  839515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:14:08.251529  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.296851  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.339448  839515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:14:08.341414  839515 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-504443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
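The network inspect call above extracts the subnet, gateway, and MTU of the cluster network through a Go template. The same fields can also be read by decoding the JSON that `docker network inspect` prints; a sketch assuming the docker CLI is on PATH:

    // netinspect.go - illustrative JSON decode of `docker network inspect <name>`.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type network struct {
    	Name string
    	IPAM struct {
    		Config []struct {
    			Subnet  string
    			Gateway string
    		}
    	}
    }

    func main() {
    	// Network name taken from the log; docker CLI availability is assumed.
    	out, err := exec.Command("docker", "network", "inspect", "default-k8s-diff-port-504443").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var nets []network
    	if err := json.Unmarshal(out, &nets); err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nets {
    		for _, c := range n.IPAM.Config {
    			fmt.Printf("%s: subnet=%s gateway=%s\n", n.Name, c.Subnet, c.Gateway)
    		}
    	}
    }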
	I0929 13:14:08.362344  839515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:14:08.367546  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.381721  839515 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:14:08.381862  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:08.381951  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.433062  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.433096  839515 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:14:08.433161  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.473938  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.473972  839515 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:14:08.473983  839515 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0929 13:14:08.474084  839515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-504443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:14:08.474149  839515 ssh_runner.go:195] Run: crio config
	I0929 13:14:08.535858  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:08.535928  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:08.535954  839515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:14:08.535987  839515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504443 NodeName:default-k8s-diff-port-504443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:14:08.536149  839515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504443"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:14:08.536221  839515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:14:08.549875  839515 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:14:08.549968  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:14:08.562591  839515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 13:14:08.588448  839515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:14:08.613818  839515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
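The 2224-byte file copied above is the multi-document kubeadm.yaml shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a multi-document file, assuming the gopkg.in/yaml.v3 module; the path is a placeholder:

    // kubeadmyaml.go - illustrative multi-document parse of a generated kubeadm.yaml.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Placeholder path; on the node the file lands at /var/tmp/minikube/kubeadm.yaml.new.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			log.Fatal(err)
    		}
    		// Each document carries its own kind: InitConfiguration, ClusterConfiguration, etc.
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }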
	I0929 13:14:08.637842  839515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:14:08.642571  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.658613  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.742685  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:08.769381  839515 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443 for IP: 192.168.76.2
	I0929 13:14:08.769408  839515 certs.go:194] generating shared ca certs ...
	I0929 13:14:08.769432  839515 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:08.769610  839515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:14:08.769690  839515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:14:08.769707  839515 certs.go:256] generating profile certs ...
	I0929 13:14:08.769830  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.key
	I0929 13:14:08.769913  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key.3fc9c8d4
	I0929 13:14:08.769963  839515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key
	I0929 13:14:08.770120  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:14:08.770170  839515 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:14:08.770186  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:14:08.770222  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:14:08.770264  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:14:08.770297  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:14:08.770375  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:08.771164  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:14:08.810187  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:14:08.852550  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:14:08.909671  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:14:08.944558  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:14:08.979658  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:14:09.015199  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:14:09.050930  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:14:09.086524  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:14:09.119207  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:14:09.151483  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:14:09.186734  839515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:14:09.211662  839515 ssh_runner.go:195] Run: openssl version
	I0929 13:14:09.219872  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:14:09.232974  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237506  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237581  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.247699  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:14:09.262697  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:14:09.277818  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283413  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283551  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.293753  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:14:09.307826  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:14:09.322785  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328680  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328758  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.337578  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
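Each `ln -fs` above creates the OpenSSL subject-hash symlink (for example b5213941.0) that lets the system trust store locate the PEM. A sketch of the same step, shelling out to openssl for the hash value; the paths are placeholders and write access to /etc/ssl/certs is assumed:

    // certhashlink.go - illustrative creation of an OpenSSL subject-hash symlink for a CA PEM.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Placeholder PEM path; the run above links /usr/share/ca-certificates/minikubeCA.pem.
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Recreate the symlink idempotently, like the `test -L || ln -fs` shell above.
    	_ = os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }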
	I0929 13:14:09.349565  839515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:14:09.355212  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:14:09.365031  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:14:09.376499  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:14:09.386571  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:14:09.396193  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:14:09.405722  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
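`openssl x509 -checkend 86400`, run against each cert above, asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go, parsing the PEM and comparing NotAfter (the file name is a placeholder):

    // certexpiry.go - illustrative equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder path; the run above checks certs under /var/lib/minikube/certs.
    	data, err := os.ReadFile("apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// -checkend 86400: fail if the cert is no longer valid 24h from now.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate valid past 24h, NotAfter:", cert.NotAfter)
    	}
    }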
	I0929 13:14:09.416490  839515 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:09.416619  839515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:14:09.416692  839515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:14:09.480165  839515 cri.go:89] found id: ""
	I0929 13:14:09.480329  839515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:14:09.502356  839515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:14:09.502385  839515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:14:09.502465  839515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:14:09.516584  839515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:14:09.517974  839515 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-504443" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.518950  839515 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-564029/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-504443" cluster setting kubeconfig missing "default-k8s-diff-port-504443" context setting]
	I0929 13:14:09.520381  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.523350  839515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:14:09.540146  839515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:14:09.540271  839515 kubeadm.go:593] duration metric: took 37.87462ms to restartPrimaryControlPlane
	I0929 13:14:09.540292  839515 kubeadm.go:394] duration metric: took 123.821391ms to StartCluster
	I0929 13:14:09.540318  839515 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.540461  839515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.543243  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.543701  839515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:14:09.543964  839515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:14:09.544056  839515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544105  839515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544134  839515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504443"
	I0929 13:14:09.544215  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:09.544297  839515 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544313  839515 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544323  839515 addons.go:247] addon dashboard should already be in state true
	I0929 13:14:09.544356  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544499  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.544580  839515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544601  839515 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544610  839515 addons.go:247] addon metrics-server should already be in state true
	I0929 13:14:09.544638  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544779  839515 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544826  839515 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:14:09.544867  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544923  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545131  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545706  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.546905  839515 out.go:179] * Verifying Kubernetes components...
	I0929 13:14:09.548849  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:09.588222  839515 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.588254  839515 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:14:09.588394  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.589235  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.591356  839515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:14:09.592899  839515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.592920  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:14:09.592997  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.599097  839515 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:14:09.603537  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:09.603567  839515 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:09.603641  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.623364  839515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:14:09.625378  839515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:14:09.626964  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:09.626991  839515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:09.627087  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.646947  839515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.647072  839515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:09.647170  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.657171  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.660429  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.682698  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.694425  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.758623  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:09.782535  839515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:09.796122  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.824319  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:09.824349  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:09.831248  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:09.831269  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:09.857539  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.865401  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:09.865601  839515 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:09.868433  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:09.868454  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:09.911818  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.911849  839515 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:09.919662  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:09.919693  839515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:09.945916  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.956819  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:09.956847  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:09.983049  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:09.983088  839515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:10.008150  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:10.008187  839515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:10.035225  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:10.035255  839515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:10.063000  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:10.063033  839515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:10.088151  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:10.088182  839515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:10.111599  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:12.055468  839515 node_ready.go:49] node "default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:12.055507  839515 node_ready.go:38] duration metric: took 2.272916493s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:12.055524  839515 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:12.055588  839515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:12.693113  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.896952632s)
	I0929 13:14:12.693205  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.835545565s)
	I0929 13:14:12.693264  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.747320981s)
	I0929 13:14:12.693289  839515 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-504443"
	I0929 13:14:12.693401  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.581752595s)
	I0929 13:14:12.693437  839515 api_server.go:72] duration metric: took 3.149694543s to wait for apiserver process to appear ...
	I0929 13:14:12.693448  839515 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:12.693465  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:12.695374  839515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-504443 addons enable metrics-server
	
	I0929 13:14:12.698283  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:12.698311  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:12.701668  839515 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0929 13:14:09.762777  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:12.254708  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:12.703272  839515 addons.go:514] duration metric: took 3.159290714s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:13.194062  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.199962  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.200005  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:13.693647  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.699173  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.699207  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:14.193661  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:14.198386  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:14:14.199540  839515 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:14.199566  839515 api_server.go:131] duration metric: took 1.506111317s to wait for apiserver health ...
	I0929 13:14:14.199576  839515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:14.203404  839515 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:14.203444  839515 system_pods.go:61] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.203452  839515 system_pods.go:61] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.203458  839515 system_pods.go:61] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.203465  839515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.203471  839515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.203482  839515 system_pods.go:61] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.203495  839515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.203503  839515 system_pods.go:61] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.203512  839515 system_pods.go:61] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.203520  839515 system_pods.go:74] duration metric: took 3.93835ms to wait for pod list to return data ...
	I0929 13:14:14.203531  839515 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:14.206279  839515 default_sa.go:45] found service account: "default"
	I0929 13:14:14.206304  839515 default_sa.go:55] duration metric: took 2.763244ms for default service account to be created ...
	I0929 13:14:14.206315  839515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:14.209977  839515 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:14.210027  839515 system_pods.go:89] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.210040  839515 system_pods.go:89] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.210048  839515 system_pods.go:89] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.210057  839515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.210066  839515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.210073  839515 system_pods.go:89] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.210082  839515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.210089  839515 system_pods.go:89] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.210121  839515 system_pods.go:89] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.210130  839515 system_pods.go:126] duration metric: took 3.808134ms to wait for k8s-apps to be running ...
	I0929 13:14:14.210140  839515 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:14.210201  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:14.225164  839515 system_svc.go:56] duration metric: took 15.009784ms WaitForService to wait for kubelet
	I0929 13:14:14.225205  839515 kubeadm.go:578] duration metric: took 4.681459973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:14.225249  839515 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:14.228249  839515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:14.228290  839515 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:14.228307  839515 node_conditions.go:105] duration metric: took 3.048343ms to run NodePressure ...
	I0929 13:14:14.228326  839515 start.go:241] waiting for startup goroutines ...
	I0929 13:14:14.228336  839515 start.go:246] waiting for cluster config update ...
	I0929 13:14:14.228350  839515 start.go:255] writing updated cluster config ...
	I0929 13:14:14.228612  839515 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:14.233754  839515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:14.238169  839515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:16.244346  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:14.257696  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:16.754720  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:18.244963  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:20.245434  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:19.254143  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:21.754181  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:22.245771  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:24.743982  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:26.745001  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:23.755533  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:26.254152  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:29.244352  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:31.244535  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:28.753653  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:30.754009  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:33.744429  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:35.745000  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:33.254079  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:35.753251  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:37.754125  837560 pod_ready.go:94] pod "coredns-66bc5c9577-vrkvb" is "Ready"
	I0929 13:14:37.754153  837560 pod_ready.go:86] duration metric: took 32.006559006s for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.757295  837560 pod_ready.go:83] waiting for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.762511  837560 pod_ready.go:94] pod "etcd-embed-certs-144376" is "Ready"
	I0929 13:14:37.762543  837560 pod_ready.go:86] duration metric: took 5.214008ms for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.765205  837560 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.769732  837560 pod_ready.go:94] pod "kube-apiserver-embed-certs-144376" is "Ready"
	I0929 13:14:37.769763  837560 pod_ready.go:86] duration metric: took 4.5304ms for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.772045  837560 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.952582  837560 pod_ready.go:94] pod "kube-controller-manager-embed-certs-144376" is "Ready"
	I0929 13:14:37.952613  837560 pod_ready.go:86] duration metric: took 180.54484ms for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.152075  837560 pod_ready.go:83] waiting for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.552510  837560 pod_ready.go:94] pod "kube-proxy-bdkrl" is "Ready"
	I0929 13:14:38.552543  837560 pod_ready.go:86] duration metric: took 400.438224ms for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.751930  837560 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152918  837560 pod_ready.go:94] pod "kube-scheduler-embed-certs-144376" is "Ready"
	I0929 13:14:39.152978  837560 pod_ready.go:86] duration metric: took 401.010043ms for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152998  837560 pod_ready.go:40] duration metric: took 33.409779031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:39.200854  837560 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:39.202814  837560 out.go:179] * Done! kubectl is now configured to use "embed-certs-144376" cluster and "default" namespace by default
	W0929 13:14:38.244646  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:40.745094  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:43.243922  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:45.744130  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	I0929 13:14:46.743671  839515 pod_ready.go:94] pod "coredns-66bc5c9577-prpff" is "Ready"
	I0929 13:14:46.743700  839515 pod_ready.go:86] duration metric: took 32.505501945s for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.746421  839515 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.752034  839515 pod_ready.go:94] pod "etcd-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.752061  839515 pod_ready.go:86] duration metric: took 5.610516ms for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.754137  839515 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.758705  839515 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.758739  839515 pod_ready.go:86] duration metric: took 4.576444ms for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.761180  839515 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.941521  839515 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.941552  839515 pod_ready.go:86] duration metric: took 180.339824ms for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.141974  839515 pod_ready.go:83] waiting for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.541782  839515 pod_ready.go:94] pod "kube-proxy-vcsfr" is "Ready"
	I0929 13:14:47.541812  839515 pod_ready.go:86] duration metric: took 399.809326ms for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.742034  839515 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142534  839515 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:48.142565  839515 pod_ready.go:86] duration metric: took 400.492621ms for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142578  839515 pod_ready.go:40] duration metric: took 33.908786928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:48.192681  839515 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:48.194961  839515 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-504443" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 13:19:15 no-preload-929827 crio[562]: time="2025-09-29 13:19:15.142771447Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d6b70ba6-5393-4069-806f-66eb182d8159 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:18 no-preload-929827 crio[562]: time="2025-09-29 13:19:18.141322888Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1daa628d-d413-4a6b-a311-29811ba1f2d7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:18 no-preload-929827 crio[562]: time="2025-09-29 13:19:18.141687869Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1daa628d-d413-4a6b-a311-29811ba1f2d7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.141733804Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4cc1ac81-ea2a-4ae6-b4bc-56a61b80678f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.141743094Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bbc8f4af-144e-4f13-99fb-7c97b4361887 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.142094333Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4cc1ac81-ea2a-4ae6-b4bc-56a61b80678f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.142219542Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=bbc8f4af-144e-4f13-99fb-7c97b4361887 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.142761583Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=216df903-d414-477f-a07e-026bb0038b86 name=/runtime.v1.ImageService/PullImage
	Sep 29 13:19:30 no-preload-929827 crio[562]: time="2025-09-29 13:19:30.144195601Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:19:44 no-preload-929827 crio[562]: time="2025-09-29 13:19:44.141685422Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d3b6c8c7-355a-4d07-96ea-679b11c550cd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:44 no-preload-929827 crio[562]: time="2025-09-29 13:19:44.142027320Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d3b6c8c7-355a-4d07-96ea-679b11c550cd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:57 no-preload-929827 crio[562]: time="2025-09-29 13:19:57.142088235Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ebaa761e-e1f2-4c6a-8a93-85b86da78a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:19:57 no-preload-929827 crio[562]: time="2025-09-29 13:19:57.142410966Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ebaa761e-e1f2-4c6a-8a93-85b86da78a12 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:12 no-preload-929827 crio[562]: time="2025-09-29 13:20:12.141951183Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=58875376-477c-4369-9524-cdc85c2dc212 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:12 no-preload-929827 crio[562]: time="2025-09-29 13:20:12.142240035Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=58875376-477c-4369-9524-cdc85c2dc212 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:14 no-preload-929827 crio[562]: time="2025-09-29 13:20:14.141264866Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e7185336-ab5b-4a38-ad53-f5b4e1df39a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:14 no-preload-929827 crio[562]: time="2025-09-29 13:20:14.141548002Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e7185336-ab5b-4a38-ad53-f5b4e1df39a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:26 no-preload-929827 crio[562]: time="2025-09-29 13:20:26.141667196Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e53d2dfc-aba5-43b4-b09a-84ddb6131f9a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:26 no-preload-929827 crio[562]: time="2025-09-29 13:20:26.141954966Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e53d2dfc-aba5-43b4-b09a-84ddb6131f9a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:29 no-preload-929827 crio[562]: time="2025-09-29 13:20:29.141387020Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1752fecd-aa4f-4116-9381-c0ba83d5881e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:29 no-preload-929827 crio[562]: time="2025-09-29 13:20:29.141714837Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1752fecd-aa4f-4116-9381-c0ba83d5881e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:37 no-preload-929827 crio[562]: time="2025-09-29 13:20:37.142392962Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8277221c-4df8-41a2-b2e0-b4c3a9adb019 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:37 no-preload-929827 crio[562]: time="2025-09-29 13:20:37.142681167Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8277221c-4df8-41a2-b2e0-b4c3a9adb019 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:44 no-preload-929827 crio[562]: time="2025-09-29 13:20:44.141488039Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d6b72166-292a-499c-8c1c-095b13d997d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:20:44 no-preload-929827 crio[562]: time="2025-09-29 13:20:44.141820558Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d6b72166-292a-499c-8c1c-095b13d997d5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0534553f92453       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   fb718710d4565       dashboard-metrics-scraper-6ffb444bf9-vf7bg
	c83a8bad7ddf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   cd69e02213daa       storage-provisioner
	8fa465feaff34       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   6ea557efa4c5c       coredns-66bc5c9577-w9q72
	3d462f220f279       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   6c270d20e3a07       busybox
	a1328a5fb4884       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   1ba97ec482053       kindnet-q7vkx
	49f4eabe0b833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   cd69e02213daa       storage-provisioner
	96f6608315031       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   b9f9d50dd0b9d       kube-proxy-hxs55
	24ab90d24d8cc       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   2fd2fc10d5f25       kube-controller-manager-no-preload-929827
	f91e471fbeff1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   9701e4244e31e       kube-scheduler-no-preload-929827
	4f40a87ba6d97       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   fd6ef2e726ba2       etcd-no-preload-929827
	6bd50ae447d36       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   a97fb7662f9ca       kube-apiserver-no-preload-929827
	
	
	==> coredns [8fa465feaff34d461599f88d30ba96936af260e889f95893cd7a4b5ac8ddf10f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38086 - 2042 "HINFO IN 5773216621506957702.4151051612224799096. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099920741s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-929827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-929827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-929827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:10:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-929827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:18:26 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:18:26 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:18:26 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:18:26 +0000   Mon, 29 Sep 2025 13:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-929827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7490fe7b8e6c48fdbf612d06b66fe080
	  System UUID:                f34f8961-8004-415b-80a2-8959d9202514
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-w9q72                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-929827                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-q7vkx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-929827              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-929827     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-hxs55                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-929827              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-wf2g9               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vf7bg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9dls8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node no-preload-929827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-929827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node no-preload-929827 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node no-preload-929827 event: Registered Node no-preload-929827 in Controller
	  Normal  NodeReady                10m                    kubelet          Node no-preload-929827 status is now: NodeReady
	  Normal  Starting                 9m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet          Node no-preload-929827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet          Node no-preload-929827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x8 over 9m39s)  kubelet          Node no-preload-929827 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m33s                  node-controller  Node no-preload-929827 event: Registered Node no-preload-929827 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [4f40a87ba6d978785059adb6668c6f202a689264a19faec6e454909ae17ce1d2] <==
	{"level":"warn","ts":"2025-09-29T13:11:07.134196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.143082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.150742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.159238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.166277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.172732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.179461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.186361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.193546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.200361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.207300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.214990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.238924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.245450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.252281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.305091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:17.118099Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.788768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:17.118200Z","caller":"traceutil/trace.go:172","msg":"trace[963470827] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:705; }","duration":"185.899327ms","start":"2025-09-29T13:12:16.932287Z","end":"2025-09-29T13:12:17.118187Z","steps":["trace[963470827] 'range keys from in-memory index tree'  (duration: 185.708627ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:12:17.118069Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.544923ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:17.118275Z","caller":"traceutil/trace.go:172","msg":"trace[1903341512] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:705; }","duration":"108.768336ms","start":"2025-09-29T13:12:17.009492Z","end":"2025-09-29T13:12:17.118260Z","steps":["trace[1903341512] 'range keys from in-memory index tree'  (duration: 108.503547ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:56.373385Z","caller":"traceutil/trace.go:172","msg":"trace[169106305] transaction","detail":"{read_only:false; response_revision:755; number_of_response:1; }","duration":"224.492701ms","start":"2025-09-29T13:12:56.148875Z","end":"2025-09-29T13:12:56.373368Z","steps":["trace[169106305] 'process raft request'  (duration: 137.077668ms)","trace[169106305] 'compare'  (duration: 87.32589ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:12:57.023342Z","caller":"traceutil/trace.go:172","msg":"trace[1751101268] transaction","detail":"{read_only:false; response_revision:757; number_of_response:1; }","duration":"222.098886ms","start":"2025-09-29T13:12:56.801227Z","end":"2025-09-29T13:12:57.023326Z","steps":["trace[1751101268] 'process raft request'  (duration: 221.803733ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:57.247186Z","caller":"traceutil/trace.go:172","msg":"trace[831769672] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"191.435845ms","start":"2025-09-29T13:12:57.055734Z","end":"2025-09-29T13:12:57.247170Z","steps":["trace[831769672] 'process raft request'  (duration: 98.701642ms)","trace[831769672] 'compare'  (duration: 92.55563ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T13:12:57.555396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.800071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:57.555472Z","caller":"traceutil/trace.go:172","msg":"trace[89482518] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:758; }","duration":"189.894018ms","start":"2025-09-29T13:12:57.365565Z","end":"2025-09-29T13:12:57.555459Z","steps":["trace[89482518] 'range keys from in-memory index tree'  (duration: 189.7243ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:20:44 up  3:03,  0 users,  load average: 0.54, 1.12, 1.67
	Linux no-preload-929827 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a1328a5fb48841316a3fb17d07e53f9189c3c039511d5573c077bfc7bf1656b9] <==
	I0929 13:18:39.005577       1 main.go:301] handling current node
	I0929 13:18:49.005435       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:49.005473       1 main.go:301] handling current node
	I0929 13:18:58.998615       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:58.998648       1 main.go:301] handling current node
	I0929 13:19:09.005773       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:09.005809       1 main.go:301] handling current node
	I0929 13:19:19.003174       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:19.003206       1 main.go:301] handling current node
	I0929 13:19:29.006006       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:29.006050       1 main.go:301] handling current node
	I0929 13:19:39.006140       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:39.006179       1 main.go:301] handling current node
	I0929 13:19:49.005121       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:49.005167       1 main.go:301] handling current node
	I0929 13:19:59.006304       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:59.006347       1 main.go:301] handling current node
	I0929 13:20:08.998041       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:20:08.998087       1 main.go:301] handling current node
	I0929 13:20:19.000125       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:20:19.000172       1 main.go:301] handling current node
	I0929 13:20:28.999140       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:20:28.999172       1 main.go:301] handling current node
	I0929 13:20:39.005970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:20:39.006018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bd50ae447d368f855a5139301d83275bd68e0a665d001d305b5ceb6bd1d7d7e] <==
	W0929 13:17:08.723018       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:17:08.723079       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:17:08.723099       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:17:08.724160       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:17:08.724237       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:17:08.724249       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:17:20.827727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:17:30.664266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:18:23.805707       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:18:53.256198       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:19:08.724310       1 handler_proxy.go:99] no RequestInfo found in the context
	W0929 13:19:08.724356       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:19:08.724402       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:19:08.724416       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 13:19:08.724430       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:19:08.725577       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:19:31.705396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:01.030957       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:43.470088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [24ab90d24d8cc43671fbc38a60650dcac255ec255ecd0edb7e610546456099f7] <==
	I0929 13:14:41.196396       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:11.160141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:11.204289       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:41.165242       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:41.211348       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:11.170373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:11.219376       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:41.174810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:41.226937       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:11.179077       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:11.234296       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:41.183270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:41.242846       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:11.187553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:11.250929       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:41.191732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:41.258799       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:11.195844       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:11.266723       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:41.200352       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:41.274826       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:11.205106       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:11.282011       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:41.209819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:41.289652       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [96f6608315031a72b38bee0947b7434da1e1f451ea3e30db7e84f3293c7add36] <==
	I0929 13:11:08.676516       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:11:08.730961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:11:08.831763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:11:08.831802       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:11:08.831945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:11:08.855126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:11:08.855188       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:11:08.861688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:11:08.862143       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:11:08.862176       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:11:08.864119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:11:08.864139       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:11:08.864156       1 config.go:309] "Starting node config controller"
	I0929 13:11:08.864169       1 config.go:200] "Starting service config controller"
	I0929 13:11:08.864175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:11:08.864169       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:11:08.864198       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:11:08.864213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:11:08.964960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:11:08.964998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:11:08.965017       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:11:08.965032       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f91e471fbeff1a6805284409bb41c627dddaaa8d0182d3c0ecf575635e0c4555] <==
	I0929 13:11:06.408408       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:11:07.713146       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:11:07.713203       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:11:07.713215       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:11:07.713224       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:11:07.744610       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:11:07.744638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:11:07.746492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:11:07.746608       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:11:07.746925       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:11:07.747006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:11:07.847171       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:19:57 no-preload-929827 kubelet[696]: E0929 13:19:57.142741     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:20:01 no-preload-929827 kubelet[696]: E0929 13:20:01.479125     696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:20:01 no-preload-929827 kubelet[696]: E0929 13:20:01.479189     696 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:20:01 no-preload-929827 kubelet[696]: E0929 13:20:01.479293     696 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-9dls8_kubernetes-dashboard(aae6c127-73bd-4658-8206-ab662eaea2b1): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 13:20:01 no-preload-929827 kubelet[696]: E0929 13:20:01.479330     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:20:04 no-preload-929827 kubelet[696]: I0929 13:20:04.141707     696 scope.go:117] "RemoveContainer" containerID="0534553f92453e186f53a67c70e0ffa1112a39c9fad0b6cc7cd261877cd6645c"
	Sep 29 13:20:04 no-preload-929827 kubelet[696]: E0929 13:20:04.141873     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:20:05 no-preload-929827 kubelet[696]: E0929 13:20:05.212249     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152005212006549  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:05 no-preload-929827 kubelet[696]: E0929 13:20:05.212291     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152005212006549  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:12 no-preload-929827 kubelet[696]: E0929 13:20:12.142565     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:20:14 no-preload-929827 kubelet[696]: E0929 13:20:14.141903     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:20:15 no-preload-929827 kubelet[696]: E0929 13:20:15.213635     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152015213412152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:15 no-preload-929827 kubelet[696]: E0929 13:20:15.213665     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152015213412152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:17 no-preload-929827 kubelet[696]: I0929 13:20:17.141424     696 scope.go:117] "RemoveContainer" containerID="0534553f92453e186f53a67c70e0ffa1112a39c9fad0b6cc7cd261877cd6645c"
	Sep 29 13:20:17 no-preload-929827 kubelet[696]: E0929 13:20:17.141661     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:20:25 no-preload-929827 kubelet[696]: E0929 13:20:25.215027     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152025214747254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:25 no-preload-929827 kubelet[696]: E0929 13:20:25.215073     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152025214747254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:26 no-preload-929827 kubelet[696]: E0929 13:20:26.142312     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:20:29 no-preload-929827 kubelet[696]: E0929 13:20:29.142103     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:20:32 no-preload-929827 kubelet[696]: I0929 13:20:32.141150     696 scope.go:117] "RemoveContainer" containerID="0534553f92453e186f53a67c70e0ffa1112a39c9fad0b6cc7cd261877cd6645c"
	Sep 29 13:20:32 no-preload-929827 kubelet[696]: E0929 13:20:32.141434     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:20:35 no-preload-929827 kubelet[696]: E0929 13:20:35.216339     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152035216113781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:35 no-preload-929827 kubelet[696]: E0929 13:20:35.216427     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152035216113781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:20:37 no-preload-929827 kubelet[696]: E0929 13:20:37.143029     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:20:44 no-preload-929827 kubelet[696]: E0929 13:20:44.142214     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	
	
	==> storage-provisioner [49f4eabe0b833d137c7c6ba8f9503c33dce71d7c3d65115d837d5f6594f7ee8b] <==
	I0929 13:11:08.626369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:38.631834       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c83a8bad7ddf0c4db96542bb906f5eb729c7a0d1960ef2624cbdff59f7811750] <==
	W0929 13:20:19.103486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:21.106578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:21.112822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:23.116792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:23.120967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:25.124295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:25.128771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:27.131610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:27.136829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:29.139819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:29.144128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:31.148019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:31.152356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:33.156284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:33.160545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:35.163599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:35.168257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:37.171587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:37.175504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:39.178539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:39.183726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:41.187443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:41.192009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:43.195752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:43.200278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-929827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8: exit status 1 (63.548207ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-wf2g9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9dls8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zmzj7" [3d7707ff-be06-433e-a8ea-a5478e606f81] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:23:39.90332546 +0000 UTC m=+3490.203361928
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-144376 describe po kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-144376 describe po kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-zmzj7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-144376/192.168.85.2
Start Time:       Mon, 29 Sep 2025 13:14:07 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5kf5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-r5kf5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7 to embed-certs-144376
  Normal   Pulling    4m23s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m52s (x5 over 8m57s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m52s (x5 over 8m57s)   kubelet            Error: ErrImagePull
  Warning  Failed     2m48s (x16 over 8m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    104s (x21 over 8m56s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard: exit status 1 (80.743629ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-zmzj7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-144376
helpers_test.go:243: (dbg) docker inspect embed-certs-144376:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316",
	        "Created": "2025-09-29T13:12:18.279731139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 837752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:13:53.446183728Z",
	            "FinishedAt": "2025-09-29T13:13:52.534833272Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/hostname",
	        "HostsPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/hosts",
	        "LogPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316-json.log",
	        "Name": "/embed-certs-144376",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-144376:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-144376",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316",
	                "LowerDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-144376",
	                "Source": "/var/lib/docker/volumes/embed-certs-144376/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-144376",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-144376",
	                "name.minikube.sigs.k8s.io": "embed-certs-144376",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "622f9248a0d3a1bfda6c0b8dbad3656d816d31cf4ff76fdea36ae38c0f1862fa",
	            "SandboxKey": "/var/run/docker/netns/622f9248a0d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-144376": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:1d:81:10:62:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a07eab151337df82bb396e6f2b16fcc57dcc4e80efb3e20e1c2d63c513de844",
	                    "EndpointID": "dd90e325237351bf982579accbb7cff937c3e35e2d74f5d34f09c0838c0f3f25",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-144376",
	                        "66bd64cb0222"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
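
For reference, the port map in the inspect output above is what the harness and minikube itself use to reach the node: later in these logs the SSH port is read with a Go template, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376, which resolves to 127.0.0.1:33458 here. The snippet below is a minimal, hypothetical Go sketch of the same lookup done by unmarshalling the raw docker inspect JSON; the struct fields mirror the output above, and hostPortFor is an illustrative helper, not part of the minikube code base.

// Hypothetical sketch: extract a published host port from `docker inspect` output.
// Field names mirror the JSON shown above; hostPortFor is not minikube code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// hostPortFor runs `docker inspect <name>` and returns the first host port
// bound to the given container port (e.g. "22/tcp").
func hostPortFor(name, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no container named %q", name)
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("%s is not published", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	// e.g. prints "ssh port: 33458" for the embed-certs-144376 container above,
	// assuming the container is still running with the same port bindings.
	port, err := hostPortFor("embed-certs-144376", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", port)
}
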
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-144376 -n embed-certs-144376
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-144376 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-144376 logs -n 25: (1.337805086s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-223488 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:14:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:14:01.801416  839515 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:14:01.801548  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801557  839515 out.go:374] Setting ErrFile to fd 2...
	I0929 13:14:01.801561  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801790  839515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:14:01.802369  839515 out.go:368] Setting JSON to false
	I0929 13:14:01.803835  839515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10587,"bootTime":1759141055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:14:01.803980  839515 start.go:140] virtualization: kvm guest
	I0929 13:14:01.806446  839515 out.go:179] * [default-k8s-diff-port-504443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:14:01.808471  839515 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:14:01.808488  839515 notify.go:220] Checking for updates...
	I0929 13:14:01.811422  839515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:14:01.813137  839515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:01.815358  839515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:14:01.817089  839515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:14:01.818747  839515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:14:01.820859  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:01.821367  839515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:14:01.850294  839515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:14:01.850496  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:01.920086  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.906779425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:01.920249  839515 docker.go:318] overlay module found
	I0929 13:14:01.923199  839515 out.go:179] * Using the docker driver based on existing profile
	I0929 13:14:01.924580  839515 start.go:304] selected driver: docker
	I0929 13:14:01.924604  839515 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:01.924742  839515 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:14:01.925594  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:02.004135  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.989084501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:02.004575  839515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:02.004635  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:02.004699  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:02.004749  839515 start.go:348] cluster config:
	{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:02.006556  839515 out.go:179] * Starting "default-k8s-diff-port-504443" primary control-plane node in "default-k8s-diff-port-504443" cluster
	I0929 13:14:02.007837  839515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:14:02.009404  839515 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:14:02.011260  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:02.011353  839515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:14:02.011371  839515 cache.go:58] Caching tarball of preloaded images
	I0929 13:14:02.011418  839515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:14:02.011589  839515 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:14:02.011606  839515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:14:02.011761  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.040696  839515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:14:02.040723  839515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:14:02.040747  839515 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:14:02.040778  839515 start.go:360] acquireMachinesLock for default-k8s-diff-port-504443: {Name:mkd1504d0afcb57e7e3a7d375c0d3d045f6ff0f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:14:02.040840  839515 start.go:364] duration metric: took 41.435µs to acquireMachinesLock for "default-k8s-diff-port-504443"
	I0929 13:14:02.040859  839515 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:14:02.040866  839515 fix.go:54] fixHost starting: 
	I0929 13:14:02.041151  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.065452  839515 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504443: state=Stopped err=<nil>
	W0929 13:14:02.065493  839515 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:14:00.890602  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:00.890614  837560 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:00.890670  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.892229  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:00.892253  837560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:00.892339  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.932762  837560 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:00.932828  837560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:00.932989  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.934137  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.945316  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.948654  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.961271  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:01.034193  837560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:01.056199  837560 node_ready.go:35] waiting up to 6m0s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:01.062352  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:01.074784  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:01.074816  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:01.080006  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:01.080035  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:01.096572  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:01.107273  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:01.107304  837560 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:01.123628  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:01.123736  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:01.159235  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.159267  837560 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:01.162841  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:01.163496  837560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:01.197386  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.198337  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:01.198359  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:01.226863  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:01.226900  837560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:01.252970  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:01.252998  837560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:01.278501  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:01.278527  837560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:01.303325  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:01.303366  837560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:01.329503  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:01.329532  837560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:01.353791  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:03.007947  837560 node_ready.go:49] node "embed-certs-144376" is "Ready"
	I0929 13:14:03.007988  837560 node_ready.go:38] duration metric: took 1.951746003s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:03.008006  837560 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:03.008068  837560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:03.686627  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.624233175s)
	I0929 13:14:03.686706  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.590098715s)
	I0929 13:14:03.686993  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489568477s)
	I0929 13:14:03.687027  837560 addons.go:479] Verifying addon metrics-server=true in "embed-certs-144376"
	I0929 13:14:03.687147  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.333304219s)
	I0929 13:14:03.687396  837560 api_server.go:72] duration metric: took 2.840723243s to wait for apiserver process to appear ...
	I0929 13:14:03.687413  837560 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:03.687434  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:03.689946  837560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-144376 addons enable metrics-server
	
	I0929 13:14:03.693918  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:03.693955  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:03.703949  837560 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0929 13:14:02.067503  839515 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-504443" ...
	I0929 13:14:02.067595  839515 cli_runner.go:164] Run: docker start default-k8s-diff-port-504443
	I0929 13:14:02.400205  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.426021  839515 kic.go:430] container "default-k8s-diff-port-504443" state is running.
	I0929 13:14:02.426697  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:02.452245  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.452576  839515 machine.go:93] provisionDockerMachine start ...
	I0929 13:14:02.452686  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:02.476313  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:02.476569  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:02.476592  839515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:14:02.477420  839515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45360->127.0.0.1:33463: read: connection reset by peer
	I0929 13:14:05.620847  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.620906  839515 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-504443"
	I0929 13:14:05.621012  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.641909  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.642258  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.642275  839515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504443 && echo "default-k8s-diff-port-504443" | sudo tee /etc/hostname
	I0929 13:14:05.804833  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.804936  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.826632  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.826863  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.826904  839515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504443/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:14:05.968467  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:14:05.968502  839515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:14:05.968535  839515 ubuntu.go:190] setting up certificates
	I0929 13:14:05.968548  839515 provision.go:84] configureAuth start
	I0929 13:14:05.968610  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:05.988690  839515 provision.go:143] copyHostCerts
	I0929 13:14:05.988763  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:14:05.988788  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:14:05.988904  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:14:05.989039  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:14:05.989049  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:14:05.989082  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:14:05.989162  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:14:05.989170  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:14:05.989196  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:14:05.989339  839515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504443 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-504443 localhost minikube]
	I0929 13:14:06.185911  839515 provision.go:177] copyRemoteCerts
	I0929 13:14:06.185989  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:14:06.186098  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.205790  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:06.309505  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:14:06.340444  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:14:06.372277  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:14:06.402506  839515 provision.go:87] duration metric: took 433.943194ms to configureAuth
	I0929 13:14:06.402539  839515 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:14:06.402765  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:06.402931  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.424941  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:06.425216  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:06.425243  839515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:14:06.741449  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:14:06.741480  839515 machine.go:96] duration metric: took 4.288878167s to provisionDockerMachine
	I0929 13:14:06.741495  839515 start.go:293] postStartSetup for "default-k8s-diff-port-504443" (driver="docker")
	I0929 13:14:06.741509  839515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:14:06.741575  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:14:06.741626  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.764273  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:03.706436  837560 addons.go:514] duration metric: took 2.859616556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:04.188145  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.194079  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.194114  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:04.687754  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.692514  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.692547  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.188198  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.193003  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:05.193033  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.687682  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.692821  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:14:05.694070  837560 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:05.694103  837560 api_server.go:131] duration metric: took 2.006683698s to wait for apiserver health ...
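The lines above record the standard retry loop: hit https://<apiserver>:8443/healthz about every 500ms, log the component report while it returns 500, and stop once it answers 200 "ok". A minimal Go sketch of that loop (not minikube's own code; the endpoint and interval are taken from the log, and TLS verification is skipped only to keep the sketch self-contained, whereas the real check authenticates with the cluster's client certs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers 200
    // or the deadline passes, roughly matching the ~500ms cadence in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Skipping verification keeps the sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
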
	I0929 13:14:05.694113  837560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:05.699584  837560 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:05.699638  837560 system_pods.go:61] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.699655  837560 system_pods.go:61] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.699667  837560 system_pods.go:61] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.699676  837560 system_pods.go:61] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.699687  837560 system_pods.go:61] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.699697  837560 system_pods.go:61] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.699711  837560 system_pods.go:61] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.699721  837560 system_pods.go:61] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.699734  837560 system_pods.go:61] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.699743  837560 system_pods.go:74] duration metric: took 5.622791ms to wait for pod list to return data ...
	I0929 13:14:05.699757  837560 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:05.703100  837560 default_sa.go:45] found service account: "default"
	I0929 13:14:05.703127  837560 default_sa.go:55] duration metric: took 3.363521ms for default service account to be created ...
	I0929 13:14:05.703137  837560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:05.712514  837560 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:05.712559  837560 system_pods.go:89] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.712571  837560 system_pods.go:89] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.712579  837560 system_pods.go:89] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.712592  837560 system_pods.go:89] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.712601  837560 system_pods.go:89] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.712614  837560 system_pods.go:89] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.712629  837560 system_pods.go:89] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.712643  837560 system_pods.go:89] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.712648  837560 system_pods.go:89] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.712659  837560 system_pods.go:126] duration metric: took 9.514361ms to wait for k8s-apps to be running ...
	I0929 13:14:05.712669  837560 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:05.712730  837560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:05.733971  837560 system_svc.go:56] duration metric: took 21.287495ms WaitForService to wait for kubelet
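The WaitForService step succeeds purely on the exit status of `systemctl is-active --quiet`. An illustrative Go sketch of that check (not minikube's code; the log's exact invocation also includes the literal word "service", which the sketch drops for simplicity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 only when the unit is
        // active, so the exit status alone answers "is kubelet running?".
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is running")
    }
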
	I0929 13:14:05.734004  837560 kubeadm.go:578] duration metric: took 4.887332987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:05.734047  837560 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:05.737599  837560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:05.737632  837560 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:05.737645  837560 node_conditions.go:105] duration metric: took 3.59217ms to run NodePressure ...
	I0929 13:14:05.737660  837560 start.go:241] waiting for startup goroutines ...
	I0929 13:14:05.737667  837560 start.go:246] waiting for cluster config update ...
	I0929 13:14:05.737679  837560 start.go:255] writing updated cluster config ...
	I0929 13:14:05.738043  837560 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:05.743175  837560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:05.747563  837560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:07.753718  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:06.865904  839515 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:14:06.869732  839515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:14:06.869776  839515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:14:06.869789  839515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:14:06.869797  839515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:14:06.869820  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:14:06.869914  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:14:06.870040  839515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:14:06.870152  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:14:06.881041  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:06.910664  839515 start.go:296] duration metric: took 169.149248ms for postStartSetup
	I0929 13:14:06.910763  839515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:14:06.910806  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.930467  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.026128  839515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:14:07.031766  839515 fix.go:56] duration metric: took 4.990890676s for fixHost
	I0929 13:14:07.031793  839515 start.go:83] releasing machines lock for "default-k8s-diff-port-504443", held for 4.990942592s
	I0929 13:14:07.031878  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:07.050982  839515 ssh_runner.go:195] Run: cat /version.json
	I0929 13:14:07.051039  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.051090  839515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:14:07.051158  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.072609  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.072906  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.245633  839515 ssh_runner.go:195] Run: systemctl --version
	I0929 13:14:07.251713  839515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:14:07.405376  839515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:14:07.412347  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.424730  839515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:14:07.424820  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.436822  839515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
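The step above neutralizes any preinstalled loopback CNI config by renaming it to *.mk_disabled, and then looks for bridge/podman configs to disable the same way (none were found here). A small Go sketch of the loopback-disable rename, mirroring the `find ... -exec mv {} {}.mk_disabled` command in the log (illustrative only, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Rename any loopback CNI config so the chosen CNI manages networking.
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            panic(err)
        }
        for _, f := range matches {
            if strings.HasSuffix(f, ".mk_disabled") {
                continue // already disabled on a previous run
            }
            if err := os.Rename(f, f+".mk_disabled"); err != nil {
                fmt.Println("rename failed:", err)
            }
        }
    }
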
	I0929 13:14:07.436852  839515 start.go:495] detecting cgroup driver to use...
	I0929 13:14:07.436922  839515 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:14:07.437079  839515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:14:07.451837  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:14:07.466730  839515 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:14:07.466785  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:14:07.482295  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:14:07.497182  839515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:14:07.573510  839515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:14:07.647720  839515 docker.go:234] disabling docker service ...
	I0929 13:14:07.647793  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:14:07.663956  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:14:07.678340  839515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:14:07.749850  839515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:14:07.833138  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:14:07.847332  839515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:14:07.869460  839515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:14:07.869534  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.882223  839515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:14:07.882304  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.895125  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.908850  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.925290  839515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:14:07.942174  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.956313  839515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.970510  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.984185  839515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:14:07.995199  839515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:14:08.006273  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.079146  839515 ssh_runner.go:195] Run: sudo systemctl restart crio
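The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: pin the pause image, switch the cgroup manager to systemd, re-add conmon_cgroup, and allow unprivileged low ports via default_sysctls. A Go sketch of the same substitutions applied to an in-memory config (the pre-edit values shown are placeholders, not taken from the log):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Placeholder pre-edit values; the real file is /etc/crio/crio.conf.d/02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"

        // Same substitutions as the sed commands in the log.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)

        // conmon_cgroup is re-added after cgroup_manager, and default_sysctls
        // gains net.ipv4.ip_unprivileged_port_start=0, as in the log.
        conf += "conmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        fmt.Print(conf)
    }
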
	I0929 13:14:08.201036  839515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:14:08.201135  839515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:14:08.205983  839515 start.go:563] Will wait 60s for crictl version
	I0929 13:14:08.206058  839515 ssh_runner.go:195] Run: which crictl
	I0929 13:14:08.210186  839515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:14:08.251430  839515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:14:08.251529  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.296851  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.339448  839515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:14:08.341414  839515 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-504443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:14:08.362344  839515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:14:08.367546  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
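The one-liner above refreshes the host.minikube.internal entry in /etc/hosts: drop any line already ending with that name, append the new mapping, and copy the result back. The same pattern appears again below for control-plane.minikube.internal. A hedged Go sketch of the idea, operating on a string instead of the real file:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry mimics the shell one-liner in the log: keep every line
    // that does not already end with "\t<name>", then append "IP\tname".
    func upsertHostsEntry(hosts, name, ip string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "host.minikube.internal", "192.168.76.1"))
    }
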
	I0929 13:14:08.381721  839515 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:14:08.381862  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:08.381951  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.433062  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.433096  839515 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:14:08.433161  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.473938  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.473972  839515 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:14:08.473983  839515 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0929 13:14:08.474084  839515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-504443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:14:08.474149  839515 ssh_runner.go:195] Run: crio config
	I0929 13:14:08.535858  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:08.535928  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:08.535954  839515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:14:08.535987  839515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504443 NodeName:default-k8s-diff-port-504443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:14:08.536149  839515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504443"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:14:08.536221  839515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:14:08.549875  839515 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:14:08.549968  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:14:08.562591  839515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 13:14:08.588448  839515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:14:08.613818  839515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 13:14:08.637842  839515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:14:08.642571  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.658613  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.742685  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:08.769381  839515 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443 for IP: 192.168.76.2
	I0929 13:14:08.769408  839515 certs.go:194] generating shared ca certs ...
	I0929 13:14:08.769432  839515 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:08.769610  839515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:14:08.769690  839515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:14:08.769707  839515 certs.go:256] generating profile certs ...
	I0929 13:14:08.769830  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.key
	I0929 13:14:08.769913  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key.3fc9c8d4
	I0929 13:14:08.769963  839515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key
	I0929 13:14:08.770120  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:14:08.770170  839515 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:14:08.770186  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:14:08.770222  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:14:08.770264  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:14:08.770297  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:14:08.770375  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:08.771164  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:14:08.810187  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:14:08.852550  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:14:08.909671  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:14:08.944558  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:14:08.979658  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:14:09.015199  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:14:09.050930  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:14:09.086524  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:14:09.119207  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:14:09.151483  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:14:09.186734  839515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:14:09.211662  839515 ssh_runner.go:195] Run: openssl version
	I0929 13:14:09.219872  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:14:09.232974  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237506  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237581  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.247699  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:14:09.262697  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:14:09.277818  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283413  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283551  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.293753  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:14:09.307826  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:14:09.322785  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328680  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328758  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.337578  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:14:09.349565  839515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:14:09.355212  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:14:09.365031  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:14:09.376499  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:14:09.386571  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:14:09.396193  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:14:09.405722  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
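Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks one question: does this certificate expire within the next 24 hours? A Go sketch of the same check using the standard library (illustrative only; the path is one of the certs listed in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, the same condition `-checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
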
	I0929 13:14:09.416490  839515 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:09.416619  839515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:14:09.416692  839515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:14:09.480165  839515 cri.go:89] found id: ""
	I0929 13:14:09.480329  839515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:14:09.502356  839515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:14:09.502385  839515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:14:09.502465  839515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:14:09.516584  839515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:14:09.517974  839515 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-504443" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.518950  839515 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-564029/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-504443" cluster setting kubeconfig missing "default-k8s-diff-port-504443" context setting]
	I0929 13:14:09.520381  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.523350  839515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:14:09.540146  839515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:14:09.540271  839515 kubeadm.go:593] duration metric: took 37.87462ms to restartPrimaryControlPlane
	I0929 13:14:09.540292  839515 kubeadm.go:394] duration metric: took 123.821391ms to StartCluster
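The `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` call above is what lets the restart conclude "the running cluster does not require reconfiguration": the freshly rendered config matches the one already on the node. A hedged Go sketch of that decision as a plain byte comparison (paths mirror the log; minikube's real logic is more involved):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        current, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        rendered, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err1 != nil || err2 != nil {
            fmt.Println("missing config, a full reconfiguration would be needed")
            return
        }
        if bytes.Equal(current, rendered) {
            fmt.Println("running cluster does not require reconfiguration")
        } else {
            fmt.Println("config drift detected, control plane needs restart")
        }
    }
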
	I0929 13:14:09.540318  839515 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.540461  839515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.543243  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.543701  839515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:14:09.543964  839515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:14:09.544056  839515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544105  839515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544134  839515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504443"
	I0929 13:14:09.544215  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:09.544297  839515 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544313  839515 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544323  839515 addons.go:247] addon dashboard should already be in state true
	I0929 13:14:09.544356  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544499  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.544580  839515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544601  839515 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544610  839515 addons.go:247] addon metrics-server should already be in state true
	I0929 13:14:09.544638  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544779  839515 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544826  839515 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:14:09.544867  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544923  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545131  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545706  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.546905  839515 out.go:179] * Verifying Kubernetes components...
	I0929 13:14:09.548849  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:09.588222  839515 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.588254  839515 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:14:09.588394  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.589235  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.591356  839515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:14:09.592899  839515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.592920  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:14:09.592997  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.599097  839515 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:14:09.603537  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:09.603567  839515 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:09.603641  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.623364  839515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:14:09.625378  839515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:14:09.626964  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:09.626991  839515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:09.627087  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.646947  839515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.647072  839515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:09.647170  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.657171  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.660429  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.682698  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.694425  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.758623  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:09.782535  839515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:09.796122  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.824319  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:09.824349  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:09.831248  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:09.831269  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:09.857539  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.865401  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:09.865601  839515 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:09.868433  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:09.868454  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:09.911818  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.911849  839515 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:09.919662  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:09.919693  839515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:09.945916  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.956819  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:09.956847  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:09.983049  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:09.983088  839515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:10.008150  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:10.008187  839515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:10.035225  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:10.035255  839515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:10.063000  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:10.063033  839515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:10.088151  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:10.088182  839515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:10.111599  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:12.055468  839515 node_ready.go:49] node "default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:12.055507  839515 node_ready.go:38] duration metric: took 2.272916493s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:12.055524  839515 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:12.055588  839515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:12.693113  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.896952632s)
	I0929 13:14:12.693205  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.835545565s)
	I0929 13:14:12.693264  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.747320981s)
	I0929 13:14:12.693289  839515 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-504443"
	I0929 13:14:12.693401  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.581752595s)
	I0929 13:14:12.693437  839515 api_server.go:72] duration metric: took 3.149694543s to wait for apiserver process to appear ...
	I0929 13:14:12.693448  839515 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:12.693465  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:12.695374  839515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-504443 addons enable metrics-server
	
	I0929 13:14:12.698283  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:12.698311  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:12.701668  839515 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0929 13:14:09.762777  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:12.254708  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:12.703272  839515 addons.go:514] duration metric: took 3.159290714s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:13.194062  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.199962  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.200005  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:13.693647  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.699173  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.699207  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:14.193661  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:14.198386  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:14:14.199540  839515 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:14.199566  839515 api_server.go:131] duration metric: took 1.506111317s to wait for apiserver health ...
	I0929 13:14:14.199576  839515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:14.203404  839515 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:14.203444  839515 system_pods.go:61] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.203452  839515 system_pods.go:61] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.203458  839515 system_pods.go:61] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.203465  839515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.203471  839515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.203482  839515 system_pods.go:61] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.203495  839515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.203503  839515 system_pods.go:61] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.203512  839515 system_pods.go:61] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.203520  839515 system_pods.go:74] duration metric: took 3.93835ms to wait for pod list to return data ...
	I0929 13:14:14.203531  839515 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:14.206279  839515 default_sa.go:45] found service account: "default"
	I0929 13:14:14.206304  839515 default_sa.go:55] duration metric: took 2.763244ms for default service account to be created ...
	I0929 13:14:14.206315  839515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:14.209977  839515 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:14.210027  839515 system_pods.go:89] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.210040  839515 system_pods.go:89] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.210048  839515 system_pods.go:89] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.210057  839515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.210066  839515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.210073  839515 system_pods.go:89] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.210082  839515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.210089  839515 system_pods.go:89] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.210121  839515 system_pods.go:89] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.210130  839515 system_pods.go:126] duration metric: took 3.808134ms to wait for k8s-apps to be running ...
	I0929 13:14:14.210140  839515 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:14.210201  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:14.225164  839515 system_svc.go:56] duration metric: took 15.009784ms WaitForService to wait for kubelet
	I0929 13:14:14.225205  839515 kubeadm.go:578] duration metric: took 4.681459973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:14.225249  839515 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:14.228249  839515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:14.228290  839515 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:14.228307  839515 node_conditions.go:105] duration metric: took 3.048343ms to run NodePressure ...
	I0929 13:14:14.228326  839515 start.go:241] waiting for startup goroutines ...
	I0929 13:14:14.228336  839515 start.go:246] waiting for cluster config update ...
	I0929 13:14:14.228350  839515 start.go:255] writing updated cluster config ...
	I0929 13:14:14.228612  839515 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:14.233754  839515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:14.238169  839515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:16.244346  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:14.257696  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:16.754720  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:18.244963  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:20.245434  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:19.254143  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:21.754181  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:22.245771  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:24.743982  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:26.745001  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:23.755533  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:26.254152  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:29.244352  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:31.244535  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:28.753653  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:30.754009  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:33.744429  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:35.745000  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:33.254079  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:35.753251  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:37.754125  837560 pod_ready.go:94] pod "coredns-66bc5c9577-vrkvb" is "Ready"
	I0929 13:14:37.754153  837560 pod_ready.go:86] duration metric: took 32.006559006s for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.757295  837560 pod_ready.go:83] waiting for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.762511  837560 pod_ready.go:94] pod "etcd-embed-certs-144376" is "Ready"
	I0929 13:14:37.762543  837560 pod_ready.go:86] duration metric: took 5.214008ms for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.765205  837560 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.769732  837560 pod_ready.go:94] pod "kube-apiserver-embed-certs-144376" is "Ready"
	I0929 13:14:37.769763  837560 pod_ready.go:86] duration metric: took 4.5304ms for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.772045  837560 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.952582  837560 pod_ready.go:94] pod "kube-controller-manager-embed-certs-144376" is "Ready"
	I0929 13:14:37.952613  837560 pod_ready.go:86] duration metric: took 180.54484ms for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.152075  837560 pod_ready.go:83] waiting for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.552510  837560 pod_ready.go:94] pod "kube-proxy-bdkrl" is "Ready"
	I0929 13:14:38.552543  837560 pod_ready.go:86] duration metric: took 400.438224ms for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.751930  837560 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152918  837560 pod_ready.go:94] pod "kube-scheduler-embed-certs-144376" is "Ready"
	I0929 13:14:39.152978  837560 pod_ready.go:86] duration metric: took 401.010043ms for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152998  837560 pod_ready.go:40] duration metric: took 33.409779031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:39.200854  837560 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:39.202814  837560 out.go:179] * Done! kubectl is now configured to use "embed-certs-144376" cluster and "default" namespace by default
	W0929 13:14:38.244646  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:40.745094  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:43.243922  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:45.744130  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	I0929 13:14:46.743671  839515 pod_ready.go:94] pod "coredns-66bc5c9577-prpff" is "Ready"
	I0929 13:14:46.743700  839515 pod_ready.go:86] duration metric: took 32.505501945s for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.746421  839515 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.752034  839515 pod_ready.go:94] pod "etcd-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.752061  839515 pod_ready.go:86] duration metric: took 5.610516ms for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.754137  839515 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.758705  839515 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.758739  839515 pod_ready.go:86] duration metric: took 4.576444ms for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.761180  839515 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.941521  839515 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.941552  839515 pod_ready.go:86] duration metric: took 180.339824ms for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.141974  839515 pod_ready.go:83] waiting for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.541782  839515 pod_ready.go:94] pod "kube-proxy-vcsfr" is "Ready"
	I0929 13:14:47.541812  839515 pod_ready.go:86] duration metric: took 399.809326ms for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.742034  839515 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142534  839515 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:48.142565  839515 pod_ready.go:86] duration metric: took 400.492621ms for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142578  839515 pod_ready.go:40] duration metric: took 33.908786928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:48.192681  839515 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:48.194961  839515 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-504443" cluster and "default" namespace by default
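The apiserver wait above is a simple poll: /healthz returns 500 with per-hook detail while the post-start hooks (rbac/bootstrap-roles, the scheduling priority classes, apiservice discovery) finish, then flips to a bare 200 "ok" about 1.5s later. A minimal sketch of the same check from the host, assuming anonymous access to /healthz is allowed (the Kubernetes default) and using the 192.168.76.2:8444 endpoint taken from this log; -k is needed because the apiserver certificate is not in the host trust store:

    until curl -sk https://192.168.76.2:8444/healthz | grep -qx ok; do
      sleep 1
    done
    # show the per-hook breakdown seen in the 500 responses above
    curl -sk "https://192.168.76.2:8444/healthz?verbose"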
	
	
	==> CRI-O <==
	Sep 29 13:22:09 embed-certs-144376 crio[562]: time="2025-09-29 13:22:09.312764579Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3e0b984b-5858-4a44-9fd0-9c7048e3c992 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:20 embed-certs-144376 crio[562]: time="2025-09-29 13:22:20.313013792Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=68bb156d-0ecd-4aeb-b765-8a4954c07716 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:20 embed-certs-144376 crio[562]: time="2025-09-29 13:22:20.313266381Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=68bb156d-0ecd-4aeb-b765-8a4954c07716 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:21 embed-certs-144376 crio[562]: time="2025-09-29 13:22:21.312365949Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ddf1b525-47e8-413a-a857-2a96ff234fd9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:21 embed-certs-144376 crio[562]: time="2025-09-29 13:22:21.312641113Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ddf1b525-47e8-413a-a857-2a96ff234fd9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:31 embed-certs-144376 crio[562]: time="2025-09-29 13:22:31.312034177Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8d5ccf39-e437-4115-8fac-260eca6e1c62 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:31 embed-certs-144376 crio[562]: time="2025-09-29 13:22:31.312288006Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8d5ccf39-e437-4115-8fac-260eca6e1c62 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:32 embed-certs-144376 crio[562]: time="2025-09-29 13:22:32.312608341Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a5755eaa-6230-4695-924b-11863b087361 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:32 embed-certs-144376 crio[562]: time="2025-09-29 13:22:32.312921225Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a5755eaa-6230-4695-924b-11863b087361 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:32 embed-certs-144376 crio[562]: time="2025-09-29 13:22:32.313650449Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2771adca-c43e-4c73-bff0-48af747da3a5 name=/runtime.v1.ImageService/PullImage
	Sep 29 13:22:32 embed-certs-144376 crio[562]: time="2025-09-29 13:22:32.336720745Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:22:43 embed-certs-144376 crio[562]: time="2025-09-29 13:22:43.311725087Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e477eb19-c815-4ff1-9cb1-bdd4b3b3aa20 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:43 embed-certs-144376 crio[562]: time="2025-09-29 13:22:43.312076558Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e477eb19-c815-4ff1-9cb1-bdd4b3b3aa20 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:58 embed-certs-144376 crio[562]: time="2025-09-29 13:22:58.312166098Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ec5452cb-20a7-40f0-b9ea-ee7e97672719 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:58 embed-certs-144376 crio[562]: time="2025-09-29 13:22:58.312492151Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ec5452cb-20a7-40f0-b9ea-ee7e97672719 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:10 embed-certs-144376 crio[562]: time="2025-09-29 13:23:10.312479436Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ab78a1ea-f24c-4a0d-b21b-5831e0caf647 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:10 embed-certs-144376 crio[562]: time="2025-09-29 13:23:10.312780421Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ab78a1ea-f24c-4a0d-b21b-5831e0caf647 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:18 embed-certs-144376 crio[562]: time="2025-09-29 13:23:18.311877679Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=924fc9ff-d8e5-46ea-b1b7-5d5992b84d5c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:18 embed-certs-144376 crio[562]: time="2025-09-29 13:23:18.312249638Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=924fc9ff-d8e5-46ea-b1b7-5d5992b84d5c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:25 embed-certs-144376 crio[562]: time="2025-09-29 13:23:25.312445517Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4a92f836-0eae-48f1-b993-414a592ec8f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:25 embed-certs-144376 crio[562]: time="2025-09-29 13:23:25.312742098Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4a92f836-0eae-48f1-b993-414a592ec8f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:31 embed-certs-144376 crio[562]: time="2025-09-29 13:23:31.312002804Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e7eb1be1-1cc6-4813-8998-09f9b3785a01 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:31 embed-certs-144376 crio[562]: time="2025-09-29 13:23:31.313217917Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e7eb1be1-1cc6-4813-8998-09f9b3785a01 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:39 embed-certs-144376 crio[562]: time="2025-09-29 13:23:39.312213673Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2dec099a-250e-439f-b68f-f77945d8bb46 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:39 embed-certs-144376 crio[562]: time="2025-09-29 13:23:39.312479919Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2dec099a-250e-439f-b68f-f77945d8bb46 name=/runtime.v1.ImageService/ImageStatus
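The CRI-O entries above repeat two unresolved image lookups: fake.domain/registry.k8s.io/echoserver:1.4 (a registry name that cannot resolve, so the pull can never succeed; presumably the image configured for the metrics-server test pod, which is why it stays unready) and the kubernetesui/dashboard digest, which is still being pulled from docker.io. A hedged way to see what the runtime actually has on the node is to query CRI-O with crictl over minikube ssh; the profile name comes from this log:

    minikube -p embed-certs-144376 ssh -- sudo crictl images
    # retry the dashboard pull by tag instead of waiting out kubelet's backoff
    minikube -p embed-certs-144376 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0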
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c500d5db36009       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   71fa28180ae9f       dashboard-metrics-scraper-6ffb444bf9-swpg7
	20f828febad04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   cc76de805b765       storage-provisioner
	2c28c442a0836       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   9683036d15d13       busybox
	b81151f3f1788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   cc76de805b765       storage-provisioner
	6e8018e1ba402       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   5dbbe42bd9107       coredns-66bc5c9577-vrkvb
	ddf7c93195045       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   771de56399555       kindnet-cs6jd
	64084dd0f47ff       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   36ff22bd74db6       kube-proxy-bdkrl
	fc40bfd0b66a2       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   ff4b4c5fab795       kube-controller-manager-embed-certs-144376
	7292cb10a6712       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   49f9784a1f205       kube-scheduler-embed-certs-144376
	1598cd93517dd       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   dcee044811ca2       kube-apiserver-embed-certs-144376
	7d31b585aa936       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   a25d91e868943       etcd-embed-certs-144376
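In the table above, dashboard-metrics-scraper is on attempt 6 and currently Exited, while everything else restarted cleanly after the node restart. A sketch for pulling its state and logs straight from the runtime; the truncated container ID printed above should be enough for crictl to resolve:

    minikube -p embed-certs-144376 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
    minikube -p embed-certs-144376 ssh -- sudo crictl logs c500d5db36009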
	
	
	==> coredns [6e8018e1ba402bbd1d336a9cd3a379b09dd4678592e47cdd2d79211c76d02da8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39425 - 47726 "HINFO IN 3466498718447411044.2783620433881952790. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10767793s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
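The CoreDNS errors above are i/o timeouts to 10.96.0.1:443, the in-cluster kubernetes Service VIP, during the window when the apiserver and kube-proxy were still coming back; CoreDNS recovers once that path is programmed (the pod goes Ready at 13:14:37 in the log above). A hypothetical probe of the same path from a throwaway pod; the image and flags are illustrative and not part of the test:

    kubectl --context embed-certs-144376 run svc-probe --rm -i --restart=Never \
      --image=curlimages/curl -- curl -ksS --max-time 5 https://10.96.0.1/healthz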
	
	
	==> describe nodes <==
	Name:               embed-certs-144376
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-144376
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-144376
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-144376
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:23:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:23:03 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:23:03 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:23:03 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:23:03 +0000   Mon, 29 Sep 2025 13:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-144376
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7dea206b7bf44d46a0d219c98d3402a3
	  System UUID:                620c5672-8e57-43c3-9cff-b9f1422658b4
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-vrkvb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-embed-certs-144376                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-cs6jd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-embed-certs-144376             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-embed-certs-144376    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bdkrl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-embed-certs-144376             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-8wkwn               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-swpg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zmzj7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node embed-certs-144376 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node embed-certs-144376 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node embed-certs-144376 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                    node-controller  Node embed-certs-144376 event: Registered Node embed-certs-144376 in Controller
	  Normal  NodeReady                10m                    kubelet          Node embed-certs-144376 status is now: NodeReady
	  Normal  Starting                 9m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m41s (x8 over 9m41s)  kubelet          Node embed-certs-144376 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x8 over 9m41s)  kubelet          Node embed-certs-144376 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x8 over 9m41s)  kubelet          Node embed-certs-144376 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m34s                  node-controller  Node embed-certs-144376 event: Registered Node embed-certs-144376 in Controller
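The snapshot above is the usual post-mortem view: node Ready since 13:13:20, no taints, and the pods that never settle in this run are metrics-server and the dashboard-metrics-scraper. If the profile is still up, the same view can be regenerated at any time; a minimal sketch, assuming the embed-certs-144376 context is still present in the kubeconfig:

    kubectl --context embed-certs-144376 describe node embed-certs-144376
    kubectl --context embed-certs-144376 get pods -A -o wide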
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [7d31b585aa936e5b5f19f942cd8dd7597ad140998930c0f2f49c079b6d39d776] <==
	{"level":"warn","ts":"2025-09-29T13:14:02.299236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.310019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.320124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.329102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.337099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.344937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.352474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.360820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.369964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.378913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.388342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.396950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.405561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.413795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.422518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.431198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.440509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.449535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.457794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.466729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.474823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.485391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.492855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.500057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.554235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:23:41 up  3:06,  0 users,  load average: 0.88, 0.93, 1.49
	Linux embed-certs-144376 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ddf7c931950453b8415673fba84207479f2d7842e988e0588478d28906379b07] <==
	I0929 13:21:34.234049       1 main.go:301] handling current node
	I0929 13:21:44.233477       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:21:44.233543       1 main.go:301] handling current node
	I0929 13:21:54.231053       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:21:54.231092       1 main.go:301] handling current node
	I0929 13:22:04.240201       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:04.240234       1 main.go:301] handling current node
	I0929 13:22:14.238084       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:14.238122       1 main.go:301] handling current node
	I0929 13:22:24.233655       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:24.233709       1 main.go:301] handling current node
	I0929 13:22:34.235587       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:34.235629       1 main.go:301] handling current node
	I0929 13:22:44.232810       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:44.232863       1 main.go:301] handling current node
	I0929 13:22:54.235873       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:22:54.235939       1 main.go:301] handling current node
	I0929 13:23:04.231273       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:23:04.231308       1 main.go:301] handling current node
	I0929 13:23:14.237984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:23:14.238025       1 main.go:301] handling current node
	I0929 13:23:24.235981       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:23:24.236035       1 main.go:301] handling current node
	I0929 13:23:34.231519       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:23:34.231580       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1598cd93517dd22b6e988bd9bf309975c6618919d8b76695d9a395e2d0bbb04c] <==
	I0929 13:19:41.828496       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:20:04.060875       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:20:04.060995       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:20:04.061015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:20:04.061146       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:20:04.061242       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:20:04.063037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:20:11.809671       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:44.214981       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:21:35.700164       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:22:04.061564       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:04.061630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:22:04.061645       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:22:04.063819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:04.064015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:22:04.064037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:22:09.808553       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:22:55.588844       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:23:24.068433       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [fc40bfd0b66a2683e92b69459409e9f07839d9e5eface8f1106d2b80951c1b80] <==
	I0929 13:17:37.484876       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:07.453240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:07.493014       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:37.458475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:37.501869       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:07.463735       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:07.510416       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:37.469274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:37.518392       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:07.474108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:07.526810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:37.478650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:37.534362       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:21:07.483943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:21:07.542673       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:21:37.488354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:21:37.551110       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:22:07.494581       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:22:07.559850       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:22:37.499397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:22:37.568098       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:07.504148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:07.575461       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:37.508822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:37.582858       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [64084dd0f47ff8074a122fb5e82e870a23b3dc3c07700e3bd18b887c37e590cd] <==
	I0929 13:14:03.884968       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:14:03.963039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:14:04.063458       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:14:04.063523       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 13:14:04.063656       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:14:04.089169       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:14:04.089240       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:14:04.095842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:14:04.096425       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:14:04.096465       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:04.098468       1 config.go:200] "Starting service config controller"
	I0929 13:14:04.098491       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:14:04.098518       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:14:04.098524       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:14:04.098539       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:14:04.098543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:14:04.099629       1 config.go:309] "Starting node config controller"
	I0929 13:14:04.099652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:14:04.099660       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:14:04.198690       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:14:04.198717       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:14:04.198722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7292cb10a67121f433d0bde2a2c955806dc4f4fd8f6d44d1b72039a3de28e08a] <==
	I0929 13:14:01.869165       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:14:03.011421       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:14:03.011458       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:14:03.011469       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:14:03.011480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:14:03.059117       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:14:03.059148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:03.061329       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:03.061381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:03.061778       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:14:03.061809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:14:03.162004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:23:00 embed-certs-144376 kubelet[710]: E0929 13:23:00.383997     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152180383700949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:00 embed-certs-144376 kubelet[710]: E0929 13:23:00.384038     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152180383700949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:02 embed-certs-144376 kubelet[710]: I0929 13:23:02.311452     710 scope.go:117] "RemoveContainer" containerID="c500d5db360098f95c2c4e76da68aa561acd02256cf53247c7184ba583016cdb"
	Sep 29 13:23:02 embed-certs-144376 kubelet[710]: E0929 13:23:02.311663     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:23:03 embed-certs-144376 kubelet[710]: E0929 13:23:03.677249     710 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:23:03 embed-certs-144376 kubelet[710]: E0929 13:23:03.677327     710 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:23:03 embed-certs-144376 kubelet[710]: E0929 13:23:03.677455     710 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-zmzj7_kubernetes-dashboard(3d7707ff-be06-433e-a8ea-a5478e606f81): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 13:23:03 embed-certs-144376 kubelet[710]: E0929 13:23:03.677514     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:23:10 embed-certs-144376 kubelet[710]: E0929 13:23:10.313094     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:23:10 embed-certs-144376 kubelet[710]: E0929 13:23:10.385642     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152190385366158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:10 embed-certs-144376 kubelet[710]: E0929 13:23:10.385679     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152190385366158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:17 embed-certs-144376 kubelet[710]: I0929 13:23:17.311419     710 scope.go:117] "RemoveContainer" containerID="c500d5db360098f95c2c4e76da68aa561acd02256cf53247c7184ba583016cdb"
	Sep 29 13:23:17 embed-certs-144376 kubelet[710]: E0929 13:23:17.311589     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:23:18 embed-certs-144376 kubelet[710]: E0929 13:23:18.312634     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:23:20 embed-certs-144376 kubelet[710]: E0929 13:23:20.387473     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152200387194658  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:20 embed-certs-144376 kubelet[710]: E0929 13:23:20.387518     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152200387194658  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:25 embed-certs-144376 kubelet[710]: E0929 13:23:25.313197     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:23:29 embed-certs-144376 kubelet[710]: I0929 13:23:29.311409     710 scope.go:117] "RemoveContainer" containerID="c500d5db360098f95c2c4e76da68aa561acd02256cf53247c7184ba583016cdb"
	Sep 29 13:23:29 embed-certs-144376 kubelet[710]: E0929 13:23:29.311678     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:23:30 embed-certs-144376 kubelet[710]: E0929 13:23:30.389176     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152210388853324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:30 embed-certs-144376 kubelet[710]: E0929 13:23:30.389214     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152210388853324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:31 embed-certs-144376 kubelet[710]: E0929 13:23:31.314167     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:23:39 embed-certs-144376 kubelet[710]: E0929 13:23:39.312854     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:23:40 embed-certs-144376 kubelet[710]: E0929 13:23:40.390981     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152220390685060  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:40 embed-certs-144376 kubelet[710]: E0929 13:23:40.391023     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152220390685060  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	
	
	==> storage-provisioner [20f828febad049e885af5b33e66f01607bc06a14adebea310f5c13dcae86ffa0] <==
	W0929 13:23:16.104497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:18.108058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:18.112509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:20.115852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:20.120499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:22.126312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:22.130689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:24.134041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:24.139711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:26.143369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:26.149209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:28.153214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:28.157552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:30.161251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:30.165535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:32.169483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:32.174481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:34.178044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:34.182674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:36.186119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:36.190664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:38.194611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:38.199734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:40.203539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:40.208322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b81151f3f178816a8153b88c2d79acae49eec4dda7952abb12ac6c961be4e6b7] <==
	I0929 13:14:03.875110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:14:33.879306       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-144376 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7: exit status 1 (65.234275ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8wkwn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zmzj7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gmnqw" [a16fafc6-e94a-47ed-8838-4df0ecd6eb6c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:18:15.385462  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:19:09.425218  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:23:48.8844641 +0000 UTC m=+3499.184500566
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe po kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-504443 describe po kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-gmnqw
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-504443/192.168.76.2
Start Time:       Mon, 29 Sep 2025 13:14:16 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fqmq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-7fqmq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m31s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw to default-k8s-diff-port-504443
Normal   Pulling    4m21s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m49s (x5 over 8m57s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m49s (x5 over 8m57s)   kubelet            Error: ErrImagePull
Warning  Failed     2m48s (x16 over 8m56s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    105s (x21 over 8m56s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard: exit status 1 (78.481632ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-gmnqw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-504443
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-504443:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83",
	        "Created": "2025-09-29T13:12:58.237146464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 839701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:14:02.102317201Z",
	            "FinishedAt": "2025-09-29T13:14:01.114928788Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/hosts",
	        "LogPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83-json.log",
	        "Name": "/default-k8s-diff-port-504443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-504443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-504443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83",
	                "LowerDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-504443",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-504443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-504443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-504443",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-504443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f72f4f2a4951fd873e69965deb29d5776ecf83fad8d2032cc4a76e80e521b67",
	            "SandboxKey": "/var/run/docker/netns/8f72f4f2a495",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-504443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:be:76:8c:f8:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f5b4e4a14093b2a56f28b72dc27e49b82a8eb021b4f2e4b7640eb093e58224e4",
	                    "EndpointID": "5f2fef026ebd7b095b3ab2eed3068663a57fe40b044e5215cf3316724d92ba61",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-504443",
	                        "ec073290678c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504443 logs -n 25: (1.320606674s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-223488 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
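
The table above is the replayed minikube command history for this run. As an illustration only (the profile name below is hypothetical; the flags are copied from the "start" rows above), one of those invocations could be driven from Go roughly like this:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Hypothetical profile name; flags mirror the "start" rows in the table above.
		cmd := exec.Command("minikube", "start",
			"-p", "example-profile",
			"--memory=3072",
			"--driver=docker",
			"--container-runtime=crio",
			"--kubernetes-version=v1.34.0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("minikube start failed: %v\n%s", err, out)
		}
		log.Printf("minikube start output:\n%s", out)
	}
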
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:14:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:14:01.801416  839515 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:14:01.801548  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801557  839515 out.go:374] Setting ErrFile to fd 2...
	I0929 13:14:01.801561  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801790  839515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:14:01.802369  839515 out.go:368] Setting JSON to false
	I0929 13:14:01.803835  839515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10587,"bootTime":1759141055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:14:01.803980  839515 start.go:140] virtualization: kvm guest
	I0929 13:14:01.806446  839515 out.go:179] * [default-k8s-diff-port-504443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:14:01.808471  839515 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:14:01.808488  839515 notify.go:220] Checking for updates...
	I0929 13:14:01.811422  839515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:14:01.813137  839515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:01.815358  839515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:14:01.817089  839515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:14:01.818747  839515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:14:01.820859  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:01.821367  839515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:14:01.850294  839515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:14:01.850496  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:01.920086  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.906779425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:01.920249  839515 docker.go:318] overlay module found
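
The docker info dump above is the output of `docker system info --format "{{json .}}"`. A minimal sketch of the same probe in Go, decoding only a few of the fields visible in the dump (ServerVersion, CgroupDriver, NCPU, MemTotal):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same probe as the cli_runner call in the log: dump daemon info as JSON.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Field names match the keys that appear in the info dump above.
		var info struct {
			ServerVersion string `json:"ServerVersion"`
			CgroupDriver  string `json:"CgroupDriver"`
			NCPU          int    `json:"NCPU"`
			MemTotal      int64  `json:"MemTotal"`
		}
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
	}
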
	I0929 13:14:01.923199  839515 out.go:179] * Using the docker driver based on existing profile
	I0929 13:14:01.924580  839515 start.go:304] selected driver: docker
	I0929 13:14:01.924604  839515 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:01.924742  839515 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:14:01.925594  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:02.004135  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.989084501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:02.004575  839515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:02.004635  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:02.004699  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:02.004749  839515 start.go:348] cluster config:
	{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:02.006556  839515 out.go:179] * Starting "default-k8s-diff-port-504443" primary control-plane node in "default-k8s-diff-port-504443" cluster
	I0929 13:14:02.007837  839515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:14:02.009404  839515 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:14:02.011260  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:02.011353  839515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:14:02.011371  839515 cache.go:58] Caching tarball of preloaded images
	I0929 13:14:02.011418  839515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:14:02.011589  839515 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:14:02.011606  839515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:14:02.011761  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.040696  839515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:14:02.040723  839515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:14:02.040747  839515 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:14:02.040778  839515 start.go:360] acquireMachinesLock for default-k8s-diff-port-504443: {Name:mkd1504d0afcb57e7e3a7d375c0d3d045f6ff0f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:14:02.040840  839515 start.go:364] duration metric: took 41.435µs to acquireMachinesLock for "default-k8s-diff-port-504443"
	I0929 13:14:02.040859  839515 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:14:02.040866  839515 fix.go:54] fixHost starting: 
	I0929 13:14:02.041151  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.065452  839515 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504443: state=Stopped err=<nil>
	W0929 13:14:02.065493  839515 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:14:00.890602  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:00.890614  837560 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:00.890670  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.892229  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:00.892253  837560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:00.892339  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.932762  837560 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:00.932828  837560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:00.932989  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.934137  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.945316  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.948654  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.961271  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:01.034193  837560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:01.056199  837560 node_ready.go:35] waiting up to 6m0s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:01.062352  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:01.074784  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:01.074816  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:01.080006  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:01.080035  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:01.096572  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:01.107273  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:01.107304  837560 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:01.123628  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:01.123736  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:01.159235  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.159267  837560 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:01.162841  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:01.163496  837560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:01.197386  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.198337  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:01.198359  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:01.226863  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:01.226900  837560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:01.252970  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:01.252998  837560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:01.278501  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:01.278527  837560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:01.303325  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:01.303366  837560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:01.329503  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:01.329532  837560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:01.353791  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
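
Each addon manifest is staged over SSH and then applied in one batch with the pinned kubectl binary and an explicit KUBECONFIG, passed through sudo exactly as in the Run line above. A reduced sketch of that apply step (only two of the manifests are listed here; the paths exist inside the minikube node, not on the host):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// sudo accepts leading VAR=value assignments, which is how the log points
		// kubectl at the node-local kubeconfig. Paths are copied from the log.
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.0/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
			"-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
	}
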
	I0929 13:14:03.007947  837560 node_ready.go:49] node "embed-certs-144376" is "Ready"
	I0929 13:14:03.007988  837560 node_ready.go:38] duration metric: took 1.951746003s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:03.008006  837560 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:03.008068  837560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:03.686627  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.624233175s)
	I0929 13:14:03.686706  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.590098715s)
	I0929 13:14:03.686993  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489568477s)
	I0929 13:14:03.687027  837560 addons.go:479] Verifying addon metrics-server=true in "embed-certs-144376"
	I0929 13:14:03.687147  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.333304219s)
	I0929 13:14:03.687396  837560 api_server.go:72] duration metric: took 2.840723243s to wait for apiserver process to appear ...
	I0929 13:14:03.687413  837560 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:03.687434  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:03.689946  837560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-144376 addons enable metrics-server
	
	I0929 13:14:03.693918  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:03.693955  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:03.703949  837560 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
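
The healthz loop above keeps requesting https://192.168.85.2:8443/healthz until the failing post-start hooks (rbac/bootstrap-roles and friends) settle and the endpoint returns 200. A minimal stand-alone version of that poll, with the endpoint taken from the log; TLS verification is skipped here only because this is a throwaway sketch against a self-signed apiserver certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err != nil {
				log.Printf("healthz not reachable yet: %v", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok" once all hooks have finished
					return
				}
				log.Printf("healthz returned %d, retrying", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
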
	I0929 13:14:02.067503  839515 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-504443" ...
	I0929 13:14:02.067595  839515 cli_runner.go:164] Run: docker start default-k8s-diff-port-504443
	I0929 13:14:02.400205  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.426021  839515 kic.go:430] container "default-k8s-diff-port-504443" state is running.
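
The restart sequence above checks the container with `docker container inspect --format={{.State.Status}}` before and after `docker start`. A small sketch of that probe (container name taken from the log):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		// Same call as the cli_runner line above: read the state via a Go template.
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("default-k8s-diff-port-504443")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("state:", state) // e.g. "running" once "docker start" has finished
	}
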
	I0929 13:14:02.426697  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:02.452245  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.452576  839515 machine.go:93] provisionDockerMachine start ...
	I0929 13:14:02.452686  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:02.476313  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:02.476569  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:02.476592  839515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:14:02.477420  839515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45360->127.0.0.1:33463: read: connection reset by peer
	I0929 13:14:05.620847  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.620906  839515 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-504443"
	I0929 13:14:05.621012  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.641909  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.642258  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.642275  839515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504443 && echo "default-k8s-diff-port-504443" | sudo tee /etc/hostname
	I0929 13:14:05.804833  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.804936  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.826632  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.826863  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.826904  839515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504443/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:14:05.968467  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
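
The inlined shell above makes the new hostname resolvable locally: if no /etc/hosts line already ends in the hostname, it rewrites the 127.0.1.1 entry (or appends one). The same idea as a rough Go sketch, with the path and hostname taken from the log:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet above: make hostname resolve via
	// /etc/hosts by rewriting the 127.0.1.1 line or appending one if it is missing.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, line := range lines {
			f := strings.Fields(line)
			if len(f) >= 2 && f[len(f)-1] == hostname {
				return nil // already resolvable, nothing to do
			}
		}
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-504443"); err != nil {
			log.Fatal(err)
		}
	}
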
	I0929 13:14:05.968502  839515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:14:05.968535  839515 ubuntu.go:190] setting up certificates
	I0929 13:14:05.968548  839515 provision.go:84] configureAuth start
	I0929 13:14:05.968610  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:05.988690  839515 provision.go:143] copyHostCerts
	I0929 13:14:05.988763  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:14:05.988788  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:14:05.988904  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:14:05.989039  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:14:05.989049  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:14:05.989082  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:14:05.989162  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:14:05.989170  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:14:05.989196  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:14:05.989339  839515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504443 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-504443 localhost minikube]
	I0929 13:14:06.185911  839515 provision.go:177] copyRemoteCerts
	I0929 13:14:06.185989  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:14:06.186098  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.205790  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:06.309505  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:14:06.340444  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:14:06.372277  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:14:06.402506  839515 provision.go:87] duration metric: took 433.943194ms to configureAuth
	I0929 13:14:06.402539  839515 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:14:06.402765  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:06.402931  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.424941  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:06.425216  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:06.425243  839515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:14:06.741449  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:14:06.741480  839515 machine.go:96] duration metric: took 4.288878167s to provisionDockerMachine
	I0929 13:14:06.741495  839515 start.go:293] postStartSetup for "default-k8s-diff-port-504443" (driver="docker")
	I0929 13:14:06.741509  839515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:14:06.741575  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:14:06.741626  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.764273  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:03.706436  837560 addons.go:514] duration metric: took 2.859616556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:04.188145  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.194079  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.194114  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:04.687754  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.692514  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.692547  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.188198  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.193003  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:05.193033  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.687682  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.692821  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:14:05.694070  837560 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:05.694103  837560 api_server.go:131] duration metric: took 2.006683698s to wait for apiserver health ...
	I0929 13:14:05.694113  837560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:05.699584  837560 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:05.699638  837560 system_pods.go:61] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.699655  837560 system_pods.go:61] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.699667  837560 system_pods.go:61] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.699676  837560 system_pods.go:61] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.699687  837560 system_pods.go:61] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.699697  837560 system_pods.go:61] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.699711  837560 system_pods.go:61] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.699721  837560 system_pods.go:61] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.699734  837560 system_pods.go:61] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.699743  837560 system_pods.go:74] duration metric: took 5.622791ms to wait for pod list to return data ...
	I0929 13:14:05.699757  837560 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:05.703100  837560 default_sa.go:45] found service account: "default"
	I0929 13:14:05.703127  837560 default_sa.go:55] duration metric: took 3.363521ms for default service account to be created ...
	I0929 13:14:05.703137  837560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:05.712514  837560 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:05.712559  837560 system_pods.go:89] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.712571  837560 system_pods.go:89] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.712579  837560 system_pods.go:89] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.712592  837560 system_pods.go:89] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.712601  837560 system_pods.go:89] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.712614  837560 system_pods.go:89] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.712629  837560 system_pods.go:89] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.712643  837560 system_pods.go:89] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.712648  837560 system_pods.go:89] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.712659  837560 system_pods.go:126] duration metric: took 9.514361ms to wait for k8s-apps to be running ...
	I0929 13:14:05.712669  837560 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:05.712730  837560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:05.733971  837560 system_svc.go:56] duration metric: took 21.287495ms WaitForService to wait for kubelet
	I0929 13:14:05.734004  837560 kubeadm.go:578] duration metric: took 4.887332987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:05.734047  837560 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:05.737599  837560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:05.737632  837560 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:05.737645  837560 node_conditions.go:105] duration metric: took 3.59217ms to run NodePressure ...
	I0929 13:14:05.737660  837560 start.go:241] waiting for startup goroutines ...
	I0929 13:14:05.737667  837560 start.go:246] waiting for cluster config update ...
	I0929 13:14:05.737679  837560 start.go:255] writing updated cluster config ...
	I0929 13:14:05.738043  837560 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:05.743175  837560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:05.747563  837560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:07.753718  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:06.865904  839515 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:14:06.869732  839515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:14:06.869776  839515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:14:06.869789  839515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:14:06.869797  839515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:14:06.869820  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:14:06.869914  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:14:06.870040  839515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:14:06.870152  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:14:06.881041  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:06.910664  839515 start.go:296] duration metric: took 169.149248ms for postStartSetup
	I0929 13:14:06.910763  839515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:14:06.910806  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.930467  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.026128  839515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:14:07.031766  839515 fix.go:56] duration metric: took 4.990890676s for fixHost
	I0929 13:14:07.031793  839515 start.go:83] releasing machines lock for "default-k8s-diff-port-504443", held for 4.990942592s
	I0929 13:14:07.031878  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:07.050982  839515 ssh_runner.go:195] Run: cat /version.json
	I0929 13:14:07.051039  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.051090  839515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:14:07.051158  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.072609  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.072906  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.245633  839515 ssh_runner.go:195] Run: systemctl --version
	I0929 13:14:07.251713  839515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:14:07.405376  839515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:14:07.412347  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.424730  839515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:14:07.424820  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.436822  839515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:14:07.436852  839515 start.go:495] detecting cgroup driver to use...
	I0929 13:14:07.436922  839515 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:14:07.437079  839515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:14:07.451837  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:14:07.466730  839515 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:14:07.466785  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:14:07.482295  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:14:07.497182  839515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:14:07.573510  839515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:14:07.647720  839515 docker.go:234] disabling docker service ...
	I0929 13:14:07.647793  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:14:07.663956  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:14:07.678340  839515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:14:07.749850  839515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:14:07.833138  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:14:07.847332  839515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:14:07.869460  839515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:14:07.869534  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.882223  839515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:14:07.882304  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.895125  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.908850  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.925290  839515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:14:07.942174  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.956313  839515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.970510  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.984185  839515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:14:07.995199  839515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:14:08.006273  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.079146  839515 ssh_runner.go:195] Run: sudo systemctl restart crio
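The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A condensed sketch of the two key edits and the restart, assuming the drop-in file already exists as it does here:

    # Point CRI-O at the expected pause image and the systemd cgroup driver,
    # then restart the runtime so the changes take effect (same edits as logged above).
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload
    sudo systemctl restart crio
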
	I0929 13:14:08.201036  839515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:14:08.201135  839515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:14:08.205983  839515 start.go:563] Will wait 60s for crictl version
	I0929 13:14:08.206058  839515 ssh_runner.go:195] Run: which crictl
	I0929 13:14:08.210186  839515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:14:08.251430  839515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
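The crictl probe above confirms CRI-O 1.24.6 is answering on its socket. The same check can be run by hand; the explicit --runtime-endpoint flag is shown only for clarity and matches the endpoint written to /etc/crictl.yaml earlier in this log:

    # Query the container runtime over the CRI socket (sketch; crictl reads
    # /etc/crictl.yaml by default, so the flag is optional here).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
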
	I0929 13:14:08.251529  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.296851  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.339448  839515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:14:08.341414  839515 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-504443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:14:08.362344  839515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:14:08.367546  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.381721  839515 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:14:08.381862  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:08.381951  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.433062  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.433096  839515 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:14:08.433161  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.473938  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.473972  839515 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:14:08.473983  839515 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0929 13:14:08.474084  839515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-504443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:14:08.474149  839515 ssh_runner.go:195] Run: crio config
	I0929 13:14:08.535858  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:08.535928  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:08.535954  839515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:14:08.535987  839515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504443 NodeName:default-k8s-diff-port-504443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:14:08.536149  839515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504443"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:14:08.536221  839515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:14:08.549875  839515 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:14:08.549968  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:14:08.562591  839515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 13:14:08.588448  839515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:14:08.613818  839515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
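The kubeadm/kubelet/kube-proxy configuration rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new (the scp just above) and later compared against the config already on the node to decide whether the control plane needs reconfiguring. A minimal sketch of that comparison, mirroring the diff the log runs during restartPrimaryControlPlane:

    # Compare the freshly rendered kubeadm config with the one already on the node;
    # an empty diff means the running cluster does not need reconfiguration.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "config unchanged"
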
	I0929 13:14:08.637842  839515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:14:08.642571  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.658613  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.742685  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:08.769381  839515 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443 for IP: 192.168.76.2
	I0929 13:14:08.769408  839515 certs.go:194] generating shared ca certs ...
	I0929 13:14:08.769432  839515 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:08.769610  839515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:14:08.769690  839515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:14:08.769707  839515 certs.go:256] generating profile certs ...
	I0929 13:14:08.769830  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.key
	I0929 13:14:08.769913  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key.3fc9c8d4
	I0929 13:14:08.769963  839515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key
	I0929 13:14:08.770120  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:14:08.770170  839515 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:14:08.770186  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:14:08.770222  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:14:08.770264  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:14:08.770297  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:14:08.770375  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:08.771164  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:14:08.810187  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:14:08.852550  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:14:08.909671  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:14:08.944558  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:14:08.979658  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:14:09.015199  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:14:09.050930  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:14:09.086524  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:14:09.119207  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:14:09.151483  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:14:09.186734  839515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:14:09.211662  839515 ssh_runner.go:195] Run: openssl version
	I0929 13:14:09.219872  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:14:09.232974  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237506  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237581  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.247699  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:14:09.262697  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:14:09.277818  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283413  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283551  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.293753  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:14:09.307826  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:14:09.322785  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328680  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328758  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.337578  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:14:09.349565  839515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:14:09.355212  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:14:09.365031  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:14:09.376499  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:14:09.386571  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:14:09.396193  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:14:09.405722  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
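The openssl/ln sequence above installs each CA certificate under /usr/share/ca-certificates, links it into /etc/ssl/certs under its subject-hash name, and the -checkend 86400 calls then verify that none of the control-plane certificates expire within the next 24 hours. A sketch of both checks for a single certificate, using files that appear in the log (any of the certs handled above would do):

    # Hash-named symlink as used by OpenSSL's CA lookup, plus a 24-hour expiry check.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h"
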
	I0929 13:14:09.416490  839515 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:09.416619  839515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:14:09.416692  839515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:14:09.480165  839515 cri.go:89] found id: ""
	I0929 13:14:09.480329  839515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:14:09.502356  839515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:14:09.502385  839515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:14:09.502465  839515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:14:09.516584  839515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:14:09.517974  839515 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-504443" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.518950  839515 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-564029/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-504443" cluster setting kubeconfig missing "default-k8s-diff-port-504443" context setting]
	I0929 13:14:09.520381  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.523350  839515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:14:09.540146  839515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:14:09.540271  839515 kubeadm.go:593] duration metric: took 37.87462ms to restartPrimaryControlPlane
	I0929 13:14:09.540292  839515 kubeadm.go:394] duration metric: took 123.821391ms to StartCluster
	I0929 13:14:09.540318  839515 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.540461  839515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.543243  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.543701  839515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:14:09.543964  839515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:14:09.544056  839515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544105  839515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544134  839515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504443"
	I0929 13:14:09.544215  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:09.544297  839515 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544313  839515 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544323  839515 addons.go:247] addon dashboard should already be in state true
	I0929 13:14:09.544356  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544499  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.544580  839515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544601  839515 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544610  839515 addons.go:247] addon metrics-server should already be in state true
	I0929 13:14:09.544638  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544779  839515 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544826  839515 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:14:09.544867  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544923  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545131  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545706  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.546905  839515 out.go:179] * Verifying Kubernetes components...
	I0929 13:14:09.548849  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:09.588222  839515 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.588254  839515 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:14:09.588394  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.589235  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.591356  839515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:14:09.592899  839515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.592920  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:14:09.592997  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.599097  839515 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:14:09.603537  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:09.603567  839515 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:09.603641  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.623364  839515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:14:09.625378  839515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:14:09.626964  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:09.626991  839515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:09.627087  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.646947  839515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.647072  839515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:09.647170  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.657171  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.660429  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.682698  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.694425  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.758623  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:09.782535  839515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:09.796122  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.824319  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:09.824349  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:09.831248  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:09.831269  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:09.857539  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.865401  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:09.865601  839515 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:09.868433  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:09.868454  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:09.911818  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.911849  839515 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:09.919662  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:09.919693  839515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:09.945916  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.956819  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:09.956847  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:09.983049  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:09.983088  839515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:10.008150  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:10.008187  839515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:10.035225  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:10.035255  839515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:10.063000  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:10.063033  839515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:10.088151  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:10.088182  839515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:10.111599  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:12.055468  839515 node_ready.go:49] node "default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:12.055507  839515 node_ready.go:38] duration metric: took 2.272916493s for node "default-k8s-diff-port-504443" to be "Ready" ...
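The wait above tracks the node's Ready condition. Outside of minikube's own polling, an equivalent check could be done with kubectl wait (illustrative, not part of the test flow), using the same 6-minute budget the log mentions:

    # Block until the node reports Ready, or fail after the 6m timeout.
    kubectl wait --for=condition=Ready node/default-k8s-diff-port-504443 --timeout=6m
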
	I0929 13:14:12.055524  839515 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:12.055588  839515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:12.693113  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.896952632s)
	I0929 13:14:12.693205  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.835545565s)
	I0929 13:14:12.693264  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.747320981s)
	I0929 13:14:12.693289  839515 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-504443"
	I0929 13:14:12.693401  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.581752595s)
	I0929 13:14:12.693437  839515 api_server.go:72] duration metric: took 3.149694543s to wait for apiserver process to appear ...
	I0929 13:14:12.693448  839515 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:12.693465  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:12.695374  839515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-504443 addons enable metrics-server
	
	I0929 13:14:12.698283  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:12.698311  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:12.701668  839515 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0929 13:14:09.762777  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:12.254708  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:12.703272  839515 addons.go:514] duration metric: took 3.159290714s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
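After the four addons are enabled, their status could be confirmed from the host with the addons list subcommand (an illustrative follow-up, not something this test run performs):

    # List addon status for this profile (sketch).
    minikube -p default-k8s-diff-port-504443 addons list
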
	I0929 13:14:13.194062  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.199962  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.200005  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:13.693647  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.699173  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.699207  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:14.193661  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:14.198386  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:14:14.199540  839515 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:14.199566  839515 api_server.go:131] duration metric: took 1.506111317s to wait for apiserver health ...
	I0929 13:14:14.199576  839515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:14.203404  839515 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:14.203444  839515 system_pods.go:61] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.203452  839515 system_pods.go:61] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.203458  839515 system_pods.go:61] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.203465  839515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.203471  839515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.203482  839515 system_pods.go:61] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.203495  839515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.203503  839515 system_pods.go:61] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.203512  839515 system_pods.go:61] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.203520  839515 system_pods.go:74] duration metric: took 3.93835ms to wait for pod list to return data ...
	I0929 13:14:14.203531  839515 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:14.206279  839515 default_sa.go:45] found service account: "default"
	I0929 13:14:14.206304  839515 default_sa.go:55] duration metric: took 2.763244ms for default service account to be created ...
	I0929 13:14:14.206315  839515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:14.209977  839515 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:14.210027  839515 system_pods.go:89] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.210040  839515 system_pods.go:89] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.210048  839515 system_pods.go:89] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.210057  839515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.210066  839515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.210073  839515 system_pods.go:89] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.210082  839515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.210089  839515 system_pods.go:89] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.210121  839515 system_pods.go:89] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.210130  839515 system_pods.go:126] duration metric: took 3.808134ms to wait for k8s-apps to be running ...
	I0929 13:14:14.210140  839515 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:14.210201  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:14.225164  839515 system_svc.go:56] duration metric: took 15.009784ms WaitForService to wait for kubelet
	I0929 13:14:14.225205  839515 kubeadm.go:578] duration metric: took 4.681459973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:14.225249  839515 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:14.228249  839515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:14.228290  839515 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:14.228307  839515 node_conditions.go:105] duration metric: took 3.048343ms to run NodePressure ...
	I0929 13:14:14.228326  839515 start.go:241] waiting for startup goroutines ...
	I0929 13:14:14.228336  839515 start.go:246] waiting for cluster config update ...
	I0929 13:14:14.228350  839515 start.go:255] writing updated cluster config ...
	I0929 13:14:14.228612  839515 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:14.233754  839515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:14.238169  839515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:16.244346  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:14.257696  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:16.754720  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:18.244963  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:20.245434  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:19.254143  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:21.754181  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:22.245771  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:24.743982  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:26.745001  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:23.755533  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:26.254152  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:29.244352  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:31.244535  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:28.753653  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:30.754009  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:33.744429  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:35.745000  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:33.254079  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:35.753251  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:37.754125  837560 pod_ready.go:94] pod "coredns-66bc5c9577-vrkvb" is "Ready"
	I0929 13:14:37.754153  837560 pod_ready.go:86] duration metric: took 32.006559006s for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.757295  837560 pod_ready.go:83] waiting for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.762511  837560 pod_ready.go:94] pod "etcd-embed-certs-144376" is "Ready"
	I0929 13:14:37.762543  837560 pod_ready.go:86] duration metric: took 5.214008ms for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.765205  837560 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.769732  837560 pod_ready.go:94] pod "kube-apiserver-embed-certs-144376" is "Ready"
	I0929 13:14:37.769763  837560 pod_ready.go:86] duration metric: took 4.5304ms for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.772045  837560 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.952582  837560 pod_ready.go:94] pod "kube-controller-manager-embed-certs-144376" is "Ready"
	I0929 13:14:37.952613  837560 pod_ready.go:86] duration metric: took 180.54484ms for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.152075  837560 pod_ready.go:83] waiting for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.552510  837560 pod_ready.go:94] pod "kube-proxy-bdkrl" is "Ready"
	I0929 13:14:38.552543  837560 pod_ready.go:86] duration metric: took 400.438224ms for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.751930  837560 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152918  837560 pod_ready.go:94] pod "kube-scheduler-embed-certs-144376" is "Ready"
	I0929 13:14:39.152978  837560 pod_ready.go:86] duration metric: took 401.010043ms for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152998  837560 pod_ready.go:40] duration metric: took 33.409779031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:39.200854  837560 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:39.202814  837560 out.go:179] * Done! kubectl is now configured to use "embed-certs-144376" cluster and "default" namespace by default
	W0929 13:14:38.244646  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:40.745094  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:43.243922  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:45.744130  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	I0929 13:14:46.743671  839515 pod_ready.go:94] pod "coredns-66bc5c9577-prpff" is "Ready"
	I0929 13:14:46.743700  839515 pod_ready.go:86] duration metric: took 32.505501945s for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.746421  839515 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.752034  839515 pod_ready.go:94] pod "etcd-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.752061  839515 pod_ready.go:86] duration metric: took 5.610516ms for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.754137  839515 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.758705  839515 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.758739  839515 pod_ready.go:86] duration metric: took 4.576444ms for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.761180  839515 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.941521  839515 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.941552  839515 pod_ready.go:86] duration metric: took 180.339824ms for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.141974  839515 pod_ready.go:83] waiting for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.541782  839515 pod_ready.go:94] pod "kube-proxy-vcsfr" is "Ready"
	I0929 13:14:47.541812  839515 pod_ready.go:86] duration metric: took 399.809326ms for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.742034  839515 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142534  839515 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:48.142565  839515 pod_ready.go:86] duration metric: took 400.492621ms for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142578  839515 pod_ready.go:40] duration metric: took 33.908786928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:48.192681  839515 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:48.194961  839515 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-504443" cluster and "default" namespace by default
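	The restart log above repeatedly polls https://192.168.76.2:8444/healthz, treating 500 responses with "[-]poststarthook/... failed" lines as "still bootstrapping" until the endpoint finally answers 200 at 13:14:14, and only then moves on to waiting for kube-system pods. The sketch below illustrates that style of readiness poll with a plain Go HTTP client; the retry interval, overall timeout, and the InsecureSkipVerify setting are assumptions made only for this example and are not minikube's actual api_server.go implementation.

	// healthzpoll: a minimal sketch of the readiness poll shown in the log above.
	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; skipping verification
				// here is an assumption made to keep the sketch short.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200, as at 13:14:14 above
				}
				// A 500 here corresponds to post-start hooks still running
				// ("[-]poststarthook/... failed: reason withheld"); keep polling.
			}
			time.Sleep(500 * time.Millisecond) // illustrative interval, not minikube's
		}
		return errors.New("apiserver did not become healthy before the deadline")
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}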
	
	
	==> CRI-O <==
	Sep 29 13:22:16 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:16.908109210Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6344dbe2-b19d-40b7-85f8-bf41ce287ef9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:27 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:27.907988837Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c0e0ae59-331d-485b-80cc-87f2764330e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:27 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:27.908304266Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c0e0ae59-331d-485b-80cc-87f2764330e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:29 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:29.908098782Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=aed236e0-232b-435a-9ccf-50345d7f6413 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:29 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:29.908324929Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=aed236e0-232b-435a-9ccf-50345d7f6413 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:40 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:40.908018690Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6451b15d-55f3-4044-98e8-c47f41659c2b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:40 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:40.908332471Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6451b15d-55f3-4044-98e8-c47f41659c2b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:41.908143035Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b2d3719e-e9d4-40f6-9efb-d71c11fded45 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:41.908448887Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b2d3719e-e9d4-40f6-9efb-d71c11fded45 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:41.909234984Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a7c4e3ea-cb68-4310-b676-9c5b833c8f99 name=/runtime.v1.ImageService/PullImage
	Sep 29 13:22:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:41.910831205Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:22:51 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:51.908012224Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fd22dec1-4ee3-4024-b19f-66b0a338a2e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:22:51 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:22:51.908350091Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fd22dec1-4ee3-4024-b19f-66b0a338a2e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:06 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:06.908239060Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=718c6c95-8776-4bdd-8e93-29e7efe26107 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:06 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:06.908544020Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=718c6c95-8776-4bdd-8e93-29e7efe26107 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:18 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:18.908298824Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=880cff80-8160-419b-beb6-8e3bb3649cbb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:18 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:18.908592269Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=880cff80-8160-419b-beb6-8e3bb3649cbb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:25 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:25.907638432Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4b5db612-970e-4d11-9d43-d20e4662b583 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:25 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:25.908003184Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4b5db612-970e-4d11-9d43-d20e4662b583 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:33 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:33.908037097Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b7f1bec3-ab33-4ebe-b332-396513f93751 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:33 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:33.908331418Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b7f1bec3-ab33-4ebe-b332-396513f93751 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:40 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:40.908341919Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a79721bb-c4bb-4b07-9a37-1b13772ffe73 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:40 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:40.908677262Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a79721bb-c4bb-4b07-9a37-1b13772ffe73 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:45 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:45.907184372Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2ddd1671-38ef-4a8e-b8fe-59c8756de6da name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:23:45 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:23:45.907464314Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2ddd1671-38ef-4a8e-b8fe-59c8756de6da name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b5bfffc7794d0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   7a5a8a3f04b80       dashboard-metrics-scraper-6ffb444bf9-47kpl
	e932e508fe0aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   b878a4c0eee8e       storage-provisioner
	f4f260ee133fa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   4d065629e4bd1       coredns-66bc5c9577-prpff
	9a94851ef231f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   76a90f384388d       kindnet-fb5jq
	f50ff8e61753e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   62f8489fca403       busybox
	73711de9fb93e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   f5834108d3965       kube-proxy-vcsfr
	c9099c6e53076       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   b878a4c0eee8e       storage-provisioner
	45aac201e654c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   2d9df557a8345       kube-apiserver-default-k8s-diff-port-504443
	869cb9c9ee595       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   8c2e3b881d82c       etcd-default-k8s-diff-port-504443
	38c52fbbfcf31       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   1ab57bf894ea6       kube-controller-manager-default-k8s-diff-port-504443
	11ae39a5a4b2a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   223b1fd348502       kube-scheduler-default-k8s-diff-port-504443
	
	
	==> coredns [f4f260ee133fa2a71e1bed3ffaa90ed10104a38b223337e4dabea66e6e6a15da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52679 - 49434 "HINFO IN 1943250935440787998.4878101473045877455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.16263863s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
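	The CoreDNS log above shows the kubernetes plugin timing out while dialing the in-cluster Service VIP ("dial tcp 10.96.0.1:443: i/o timeout") during the apiserver restart. Below is a minimal sketch of that connectivity probe; it assumes it is run from a pod inside the cluster's network, reuses the address from the log, and uses an illustrative 5-second timeout.

	// dialcheck: checks whether the kubernetes Service VIP accepts TCP connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// This is the same failure mode CoreDNS reports above.
			fmt.Println("kubernetes Service VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect to 10.96.0.1:443 succeeded; the apiserver is reachable again")
	}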
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-504443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-504443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_13_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:13:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:20:39 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:20:39 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:20:39 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:20:39 +0000   Mon, 29 Sep 2025 13:13:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-504443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4dd16f3f1464516aeff8dc64d8f97e7
	  System UUID:                9ce7ec70-e159-4f57-aefc-7e470dc6dd77
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-prpff                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-504443                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-fb5jq                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504443             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504443    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vcsfr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504443             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-l5t2q                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-47kpl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gmnqw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-504443 event: Registered Node default-k8s-diff-port-504443 in Controller
	  Normal  NodeReady                10m                    kubelet          Node default-k8s-diff-port-504443 status is now: NodeReady
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m41s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m34s                  node-controller  Node default-k8s-diff-port-504443 event: Registered Node default-k8s-diff-port-504443 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [869cb9c9ee5959b76e080f7c95693a4d8a3d124e77e6b95e8b1de7a394883932] <==
	{"level":"warn","ts":"2025-09-29T13:14:11.328896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.337023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.346371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.354474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.361935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.370432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.378026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.387074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.408850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.418777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.427562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.436747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.444971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.454080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.462610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.470874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.479347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.487265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.496167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.505577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.514288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.526467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.535278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.545252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.602552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:23:50 up  3:06,  0 users,  load average: 0.97, 0.94, 1.49
	Linux default-k8s-diff-port-504443 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9a94851ef231f0cc0fe0d8707d2239b0aeb90d0223808bf4cd37f09acd0a7412] <==
	I0929 13:21:43.788183       1 main.go:301] handling current node
	I0929 13:21:53.784989       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:21:53.785048       1 main.go:301] handling current node
	I0929 13:22:03.793016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:03.793073       1 main.go:301] handling current node
	I0929 13:22:13.783640       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:13.783677       1 main.go:301] handling current node
	I0929 13:22:23.787412       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:23.787458       1 main.go:301] handling current node
	I0929 13:22:33.792036       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:33.792070       1 main.go:301] handling current node
	I0929 13:22:43.783785       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:43.783824       1 main.go:301] handling current node
	I0929 13:22:53.785838       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:53.785964       1 main.go:301] handling current node
	I0929 13:23:03.784728       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:23:03.784778       1 main.go:301] handling current node
	I0929 13:23:13.784066       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:23:13.784103       1 main.go:301] handling current node
	I0929 13:23:23.788170       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:23:23.788205       1 main.go:301] handling current node
	I0929 13:23:33.789988       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:23:33.790020       1 main.go:301] handling current node
	I0929 13:23:43.784164       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:23:43.784204       1 main.go:301] handling current node
	
	
	==> kube-apiserver [45aac201e654c63a49fceb57713f628b773c234f55e702e4a52d6f4f144e56f3] <==
	I0929 13:19:35.767213       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:11.607311       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:20:13.073231       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:20:13.073291       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:20:13.073316       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:20:13.075454       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:20:13.075531       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:20:13.075544       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:20:37.969957       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:21:38.145923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:22:01.970824       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:22:13.074285       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:13.074350       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:22:13.074371       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:22:13.076543       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:13.076636       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:22:13.076650       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:23:01.606448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:23:29.641708       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [38c52fbbfcf3188086b7e7244f30aa5b16d04ee26967b32c2df673b9908a9ff6] <==
	I0929 13:17:46.552677       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:16.507568       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:16.561346       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:46.512707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:46.569768       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:16.518361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:16.578057       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:46.522907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:46.586595       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:16.528097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:16.594681       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:46.533180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:46.603112       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:21:16.538538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:21:16.610307       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:21:46.543519       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:21:46.618960       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:22:16.548790       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:22:16.627306       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:22:46.553302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:22:46.635087       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:16.558116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:16.641963       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:46.562729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:46.649503       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [73711de9fb93eec8c4588fd6c3c3d3bc4494b223a56e01759b33f0558db5c7bf] <==
	I0929 13:14:13.442419       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:14:13.509294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:14:13.609833       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:14:13.609907       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:14:13.610059       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:14:13.630147       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:14:13.630210       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:14:13.635749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:14:13.636263       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:14:13.636309       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:13.637605       1 config.go:309] "Starting node config controller"
	I0929 13:14:13.637628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:14:13.637724       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:14:13.637744       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:14:13.637838       1 config.go:200] "Starting service config controller"
	I0929 13:14:13.637857       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:14:13.637861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:14:13.637866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:14:13.738534       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:14:13.738547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:14:13.738579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:14:13.738666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [11ae39a5a4b2aa54de1a58fcc1500a804983a7f75c9d9041bfac4248aebd4626] <==
	I0929 13:14:10.414585       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:14:12.057382       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:14:12.057542       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:14:12.057559       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:14:12.057569       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:14:12.088946       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:14:12.089003       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:12.095383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:12.095425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:12.107335       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:14:12.107406       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:14:12.196246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:23:08 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:08.975076     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152188974785728  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:08 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:08.975116     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152188974785728  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:13.243541     708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:13.243609     708 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:13.243713     708 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-gmnqw_kubernetes-dashboard(a16fafc6-e94a-47ed-8838-4df0ecd6eb6c): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:13.243748     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: I0929 13:23:13.906693     708 scope.go:117] "RemoveContainer" containerID="b5bfffc7794d0b96c438bb7314847c5d91decd31cf1b56b273987899a8cdc34a"
	Sep 29 13:23:13 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:13.906919     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:23:18 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:18.908877     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:23:18 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:18.976682     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152198976413775  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:18 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:18.976725     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152198976413775  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:25 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:25.908381     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:23:27 default-k8s-diff-port-504443 kubelet[708]: I0929 13:23:27.906943     708 scope.go:117] "RemoveContainer" containerID="b5bfffc7794d0b96c438bb7314847c5d91decd31cf1b56b273987899a8cdc34a"
	Sep 29 13:23:27 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:27.907142     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:23:28 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:28.978336     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152208978030625  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:28 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:28.978381     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152208978030625  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:33 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:33.908728     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:23:38 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:38.979484     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152218979249522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:38 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:38.979522     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152218979249522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:40 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:40.909020     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:23:41 default-k8s-diff-port-504443 kubelet[708]: I0929 13:23:41.907120     708 scope.go:117] "RemoveContainer" containerID="b5bfffc7794d0b96c438bb7314847c5d91decd31cf1b56b273987899a8cdc34a"
	Sep 29 13:23:41 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:41.907375     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:23:45 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:45.907862     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:23:48 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:48.980803     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152228980562570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:23:48 default-k8s-diff-port-504443 kubelet[708]: E0929 13:23:48.980836     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152228980562570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	
	
	==> storage-provisioner [c9099c6e5307691f3116db853b92b66c3949faab2309ad5b82cb0af51459bb7a] <==
	I0929 13:14:13.373654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:14:43.376401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e932e508fe0aade1ac939aa0cbd00a696fb0e4e4be0f66e113009c58e45036c4] <==
	W0929 13:23:25.730356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:27.734402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:27.739198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:29.742349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:29.747951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:31.750719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:31.755317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:33.759151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:33.763319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:35.766737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:35.771539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:37.774761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:37.781231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:39.784959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:39.789374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:41.792940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:41.799493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:43.803546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:43.808452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:45.811507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:45.816750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:47.820022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:47.824301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:49.827786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:23:49.834077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw: exit status 1 (66.583578ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-l5t2q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gmnqw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.68s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gg4cr" [2a3f7370-a761-486c-993f-c0a0cc93ce6b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:29:28.347425071 +0000 UTC m=+3838.647461528
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-223488 describe po kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-223488 describe po kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-gg4cr
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-223488/192.168.94.2
Start Time:       Mon, 29 Sep 2025 13:10:58 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-79dc7 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-79dc7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr to old-k8s-version-223488
Warning  Failed     14m (x4 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     14m (x4 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed     14m (x6 over 17m)     kubelet            Error: ImagePullBackOff
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Normal   BackOff    3m26s (x47 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard: exit status 1 (79.413034ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-gg4cr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-223488 logs kubernetes-dashboard-8694d4445c-gg4cr -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-223488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-223488
helpers_test.go:243: (dbg) docker inspect old-k8s-version-223488:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904",
	        "Created": "2025-09-29T13:09:18.577569114Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 813376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:35.282676032Z",
	            "FinishedAt": "2025-09-29T13:10:34.395923319Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/hosts",
	        "LogPath": "/var/lib/docker/containers/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904/3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904-json.log",
	        "Name": "/old-k8s-version-223488",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-223488:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-223488",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c4f9dce81a6b5826ef44b667dbd2b9b005bc87ffc4995840b0fce3b33810904",
	                "LowerDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a0548e5b1cc66484f44bb062497f0f5263d892f23c8fa632c7d52af7592ed91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-223488",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-223488/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-223488",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-223488",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-223488",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "831633c4715d6c4bb04097bcb43d90ab4f6a106af6efe72c1c46f36eb63bc030",
	            "SandboxKey": "/var/run/docker/netns/831633c4715d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-223488": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:19:9f:5d:a3:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0dd989a98e4be35ca09f4ad5f694ef2de11803caf0660ddd0b7a2a4c2c63ef6",
	                    "EndpointID": "17627954c891213a4a0f5121dd2871d4598ada8665af8f95f340b5597fe506d2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-223488",
	                        "3c4f9dce81a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223488 -n old-k8s-version-223488
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223488 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-223488 logs -n 25: (1.369580736s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-223488 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:14:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:14:01.801416  839515 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:14:01.801548  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801557  839515 out.go:374] Setting ErrFile to fd 2...
	I0929 13:14:01.801561  839515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:14:01.801790  839515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:14:01.802369  839515 out.go:368] Setting JSON to false
	I0929 13:14:01.803835  839515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10587,"bootTime":1759141055,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:14:01.803980  839515 start.go:140] virtualization: kvm guest
	I0929 13:14:01.806446  839515 out.go:179] * [default-k8s-diff-port-504443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:14:01.808471  839515 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:14:01.808488  839515 notify.go:220] Checking for updates...
	I0929 13:14:01.811422  839515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:14:01.813137  839515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:01.815358  839515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:14:01.817089  839515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:14:01.818747  839515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:14:01.820859  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:01.821367  839515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:14:01.850294  839515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:14:01.850496  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:01.920086  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.906779425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:01.920249  839515 docker.go:318] overlay module found
	I0929 13:14:01.923199  839515 out.go:179] * Using the docker driver based on existing profile
	I0929 13:14:01.924580  839515 start.go:304] selected driver: docker
	I0929 13:14:01.924604  839515 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:01.924742  839515 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:14:01.925594  839515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:14:02.004135  839515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 13:14:01.989084501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:14:02.004575  839515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:02.004635  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:02.004699  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:02.004749  839515 start.go:348] cluster config:
	{Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:02.006556  839515 out.go:179] * Starting "default-k8s-diff-port-504443" primary control-plane node in "default-k8s-diff-port-504443" cluster
	I0929 13:14:02.007837  839515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:14:02.009404  839515 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:14:02.011260  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:02.011353  839515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:14:02.011371  839515 cache.go:58] Caching tarball of preloaded images
	I0929 13:14:02.011418  839515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:14:02.011589  839515 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:14:02.011606  839515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:14:02.011761  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.040696  839515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:14:02.040723  839515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:14:02.040747  839515 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:14:02.040778  839515 start.go:360] acquireMachinesLock for default-k8s-diff-port-504443: {Name:mkd1504d0afcb57e7e3a7d375c0d3d045f6ff0f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:14:02.040840  839515 start.go:364] duration metric: took 41.435µs to acquireMachinesLock for "default-k8s-diff-port-504443"
	I0929 13:14:02.040859  839515 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:14:02.040866  839515 fix.go:54] fixHost starting: 
	I0929 13:14:02.041151  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.065452  839515 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504443: state=Stopped err=<nil>
	W0929 13:14:02.065493  839515 fix.go:138] unexpected machine state, will restart: <nil>
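The block above (fix.go) decides whether the existing profile container can be reused: it inspects the container's state with a Go template and, because the state is Stopped, the run later restarts it rather than recreating it. The sketch below is a minimal illustration of that inspect-then-restart pattern, not minikube's actual fix.go; the container name is the profile from this run, everything else is assumed for the example (Docker reports a stopped container as "exited").

// Minimal sketch (not minikube's fix.go): query a container's state the same
// way the log above does, and start it again when it is stopped.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "default-k8s-diff-port-504443" // profile/container name from this run
	state, err := containerState(name)
	if err != nil {
		panic(err)
	}
	if state == "exited" || state == "created" {
		// Mirrors "Restarting existing docker container" later in this log.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("container state was:", state)
}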
	I0929 13:14:00.890602  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:00.890614  837560 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:00.890670  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.892229  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:00.892253  837560 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:00.892339  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.932762  837560 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:00.932828  837560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:00.932989  837560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-144376
	I0929 13:14:00.934137  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.945316  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.948654  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:00.961271  837560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/embed-certs-144376/id_rsa Username:docker}
	I0929 13:14:01.034193  837560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:01.056199  837560 node_ready.go:35] waiting up to 6m0s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:01.062352  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:01.074784  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:01.074816  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:01.080006  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:01.080035  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:01.096572  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:01.107273  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:01.107304  837560 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:01.123628  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:01.123736  837560 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:01.159235  837560 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.159267  837560 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:01.162841  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:01.163496  837560 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:01.197386  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:01.198337  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:01.198359  837560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:01.226863  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:01.226900  837560 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:01.252970  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:01.252998  837560 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:01.278501  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:01.278527  837560 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:01.303325  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:01.303366  837560 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:01.329503  837560 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:01.329532  837560 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:01.353791  837560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:03.007947  837560 node_ready.go:49] node "embed-certs-144376" is "Ready"
	I0929 13:14:03.007988  837560 node_ready.go:38] duration metric: took 1.951746003s for node "embed-certs-144376" to be "Ready" ...
	I0929 13:14:03.008006  837560 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:03.008068  837560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:03.686627  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.624233175s)
	I0929 13:14:03.686706  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.590098715s)
	I0929 13:14:03.686993  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.489568477s)
	I0929 13:14:03.687027  837560 addons.go:479] Verifying addon metrics-server=true in "embed-certs-144376"
	I0929 13:14:03.687147  837560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.333304219s)
	I0929 13:14:03.687396  837560 api_server.go:72] duration metric: took 2.840723243s to wait for apiserver process to appear ...
	I0929 13:14:03.687413  837560 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:03.687434  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:03.689946  837560 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-144376 addons enable metrics-server
	
	I0929 13:14:03.693918  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:03.693955  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:03.703949  837560 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
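The healthz output above (and the repeats that follow) is api_server.go polling https://192.168.85.2:8443/healthz roughly every half second until the poststarthooks finish and the endpoint flips from 500 to 200. The sketch below only shows that retry-until-200 shape under stated assumptions; it is not minikube's implementation, and the real client authenticates with the cluster CA and client certificates, whereas this demo skips TLS verification for brevity.

// Illustrative polling loop for an apiserver /healthz endpoint (demo only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Demo shortcut; minikube verifies against the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	// Address taken from this run; in general it comes from the cluster config.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}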
	I0929 13:14:02.067503  839515 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-504443" ...
	I0929 13:14:02.067595  839515 cli_runner.go:164] Run: docker start default-k8s-diff-port-504443
	I0929 13:14:02.400205  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:02.426021  839515 kic.go:430] container "default-k8s-diff-port-504443" state is running.
	I0929 13:14:02.426697  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:02.452245  839515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/config.json ...
	I0929 13:14:02.452576  839515 machine.go:93] provisionDockerMachine start ...
	I0929 13:14:02.452686  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:02.476313  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:02.476569  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:02.476592  839515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:14:02.477420  839515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45360->127.0.0.1:33463: read: connection reset by peer
	I0929 13:14:05.620847  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.620906  839515 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-504443"
	I0929 13:14:05.621012  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.641909  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.642258  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.642275  839515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504443 && echo "default-k8s-diff-port-504443" | sudo tee /etc/hostname
	I0929 13:14:05.804833  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504443
	
	I0929 13:14:05.804936  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:05.826632  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:05.826863  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:05.826904  839515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504443/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:14:05.968467  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:14:05.968502  839515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:14:05.968535  839515 ubuntu.go:190] setting up certificates
	I0929 13:14:05.968548  839515 provision.go:84] configureAuth start
	I0929 13:14:05.968610  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:05.988690  839515 provision.go:143] copyHostCerts
	I0929 13:14:05.988763  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:14:05.988788  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:14:05.988904  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:14:05.989039  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:14:05.989049  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:14:05.989082  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:14:05.989162  839515 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:14:05.989170  839515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:14:05.989196  839515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:14:05.989339  839515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504443 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-504443 localhost minikube]
	I0929 13:14:06.185911  839515 provision.go:177] copyRemoteCerts
	I0929 13:14:06.185989  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:14:06.186098  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.205790  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:06.309505  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:14:06.340444  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:14:06.372277  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:14:06.402506  839515 provision.go:87] duration metric: took 433.943194ms to configureAuth
	I0929 13:14:06.402539  839515 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:14:06.402765  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:06.402931  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.424941  839515 main.go:141] libmachine: Using SSH client type: native
	I0929 13:14:06.425216  839515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I0929 13:14:06.425243  839515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:14:06.741449  839515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:14:06.741480  839515 machine.go:96] duration metric: took 4.288878167s to provisionDockerMachine
	I0929 13:14:06.741495  839515 start.go:293] postStartSetup for "default-k8s-diff-port-504443" (driver="docker")
	I0929 13:14:06.741509  839515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:14:06.741575  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:14:06.741626  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.764273  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:03.706436  837560 addons.go:514] duration metric: took 2.859616556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:04.188145  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.194079  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.194114  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:04.687754  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:04.692514  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:04.692547  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.188198  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.193003  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:05.193033  837560 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:05.687682  837560 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:14:05.692821  837560 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:14:05.694070  837560 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:05.694103  837560 api_server.go:131] duration metric: took 2.006683698s to wait for apiserver health ...
	I0929 13:14:05.694113  837560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:05.699584  837560 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:05.699638  837560 system_pods.go:61] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.699655  837560 system_pods.go:61] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.699667  837560 system_pods.go:61] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.699676  837560 system_pods.go:61] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.699687  837560 system_pods.go:61] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.699697  837560 system_pods.go:61] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.699711  837560 system_pods.go:61] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.699721  837560 system_pods.go:61] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.699734  837560 system_pods.go:61] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.699743  837560 system_pods.go:74] duration metric: took 5.622791ms to wait for pod list to return data ...
	I0929 13:14:05.699757  837560 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:05.703100  837560 default_sa.go:45] found service account: "default"
	I0929 13:14:05.703127  837560 default_sa.go:55] duration metric: took 3.363521ms for default service account to be created ...
	I0929 13:14:05.703137  837560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:05.712514  837560 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:05.712559  837560 system_pods.go:89] "coredns-66bc5c9577-vrkvb" [52cfb83d-e7b5-42b8-aa1c-750631db6ddb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:05.712571  837560 system_pods.go:89] "etcd-embed-certs-144376" [af98c90d-53ed-47f8-b18f-873b8d3f522d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:05.712579  837560 system_pods.go:89] "kindnet-cs6jd" [d90447d3-3dbf-4d6c-869a-332bc3bc74a2] Running
	I0929 13:14:05.712592  837560 system_pods.go:89] "kube-apiserver-embed-certs-144376" [0ab628fb-412a-4b26-bb99-6f872e8fa001] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:05.712601  837560 system_pods.go:89] "kube-controller-manager-embed-certs-144376" [859d8e0d-c611-409c-bd76-669c81d14332] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:05.712614  837560 system_pods.go:89] "kube-proxy-bdkrl" [5df1491d-306f-4c90-b4be-c72c40332a53] Running
	I0929 13:14:05.712629  837560 system_pods.go:89] "kube-scheduler-embed-certs-144376" [25ad758b-318e-43d4-8c61-ef94784ff36f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:05.712643  837560 system_pods.go:89] "metrics-server-746fcd58dc-8wkwn" [d0a89b58-3205-44cb-af7d-6e7a36bf99bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:05.712648  837560 system_pods.go:89] "storage-provisioner" [3c9d9a61-e3d2-4030-a441-d6976c967933] Running
	I0929 13:14:05.712659  837560 system_pods.go:126] duration metric: took 9.514361ms to wait for k8s-apps to be running ...
	I0929 13:14:05.712669  837560 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:05.712730  837560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:05.733971  837560 system_svc.go:56] duration metric: took 21.287495ms WaitForService to wait for kubelet
	I0929 13:14:05.734004  837560 kubeadm.go:578] duration metric: took 4.887332987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:05.734047  837560 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:05.737599  837560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:05.737632  837560 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:05.737645  837560 node_conditions.go:105] duration metric: took 3.59217ms to run NodePressure ...
	I0929 13:14:05.737660  837560 start.go:241] waiting for startup goroutines ...
	I0929 13:14:05.737667  837560 start.go:246] waiting for cluster config update ...
	I0929 13:14:05.737679  837560 start.go:255] writing updated cluster config ...
	I0929 13:14:05.738043  837560 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:05.743175  837560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:05.747563  837560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:07.753718  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
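The pod_ready.go wait above keeps checking the labelled kube-system pods (here coredns, selected by k8s-app=kube-dns) until each reports Ready or disappears. A rough stand-alone equivalent, expressed with kubectl rather than minikube's internal helper, is sketched below; the context name matches the embed-certs-144376 profile in this run and the timeout value is illustrative.

// Rough kubectl-based equivalent of the pod_ready wait above (not minikube code).
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "embed-certs-144376",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kube-dns", "--timeout=4m0s")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}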
	I0929 13:14:06.865904  839515 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:14:06.869732  839515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:14:06.869776  839515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:14:06.869789  839515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:14:06.869797  839515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:14:06.869820  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:14:06.869914  839515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:14:06.870040  839515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:14:06.870152  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:14:06.881041  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:06.910664  839515 start.go:296] duration metric: took 169.149248ms for postStartSetup
	I0929 13:14:06.910763  839515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:14:06.910806  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:06.930467  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.026128  839515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:14:07.031766  839515 fix.go:56] duration metric: took 4.990890676s for fixHost
	I0929 13:14:07.031793  839515 start.go:83] releasing machines lock for "default-k8s-diff-port-504443", held for 4.990942592s
	I0929 13:14:07.031878  839515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-504443
	I0929 13:14:07.050982  839515 ssh_runner.go:195] Run: cat /version.json
	I0929 13:14:07.051039  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.051090  839515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:14:07.051158  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:07.072609  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.072906  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:07.245633  839515 ssh_runner.go:195] Run: systemctl --version
	I0929 13:14:07.251713  839515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:14:07.405376  839515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:14:07.412347  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.424730  839515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:14:07.424820  839515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:14:07.436822  839515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:14:07.436852  839515 start.go:495] detecting cgroup driver to use...
	I0929 13:14:07.436922  839515 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:14:07.437079  839515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:14:07.451837  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:14:07.466730  839515 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:14:07.466785  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:14:07.482295  839515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:14:07.497182  839515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:14:07.573510  839515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:14:07.647720  839515 docker.go:234] disabling docker service ...
	I0929 13:14:07.647793  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:14:07.663956  839515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:14:07.678340  839515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:14:07.749850  839515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:14:07.833138  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:14:07.847332  839515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:14:07.869460  839515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:14:07.869534  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.882223  839515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:14:07.882304  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.895125  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.908850  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.925290  839515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:14:07.942174  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.956313  839515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.970510  839515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:14:07.984185  839515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:14:07.995199  839515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:14:08.006273  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.079146  839515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:14:08.201036  839515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:14:08.201135  839515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:14:08.205983  839515 start.go:563] Will wait 60s for crictl version
	I0929 13:14:08.206058  839515 ssh_runner.go:195] Run: which crictl
	I0929 13:14:08.210186  839515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:14:08.251430  839515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:14:08.251529  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.296851  839515 ssh_runner.go:195] Run: crio --version
	I0929 13:14:08.339448  839515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:14:08.341414  839515 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-504443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:14:08.362344  839515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:14:08.367546  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.381721  839515 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:14:08.381862  839515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:14:08.381951  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.433062  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.433096  839515 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:14:08.433161  839515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:14:08.473938  839515 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:14:08.473972  839515 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:14:08.473983  839515 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0929 13:14:08.474084  839515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-504443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:14:08.474149  839515 ssh_runner.go:195] Run: crio config
	I0929 13:14:08.535858  839515 cni.go:84] Creating CNI manager for ""
	I0929 13:14:08.535928  839515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:14:08.535954  839515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:14:08.535987  839515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504443 NodeName:default-k8s-diff-port-504443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:14:08.536149  839515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504443"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:14:08.536221  839515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:14:08.549875  839515 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:14:08.549968  839515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:14:08.562591  839515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 13:14:08.588448  839515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:14:08.613818  839515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 13:14:08.637842  839515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:14:08.642571  839515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:14:08.658613  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:08.742685  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:08.769381  839515 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443 for IP: 192.168.76.2
	I0929 13:14:08.769408  839515 certs.go:194] generating shared ca certs ...
	I0929 13:14:08.769432  839515 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:08.769610  839515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:14:08.769690  839515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:14:08.769707  839515 certs.go:256] generating profile certs ...
	I0929 13:14:08.769830  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.key
	I0929 13:14:08.769913  839515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key.3fc9c8d4
	I0929 13:14:08.769963  839515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key
	I0929 13:14:08.770120  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:14:08.770170  839515 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:14:08.770186  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:14:08.770222  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:14:08.770264  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:14:08.770297  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:14:08.770375  839515 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:14:08.771164  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:14:08.810187  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:14:08.852550  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:14:08.909671  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:14:08.944558  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:14:08.979658  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:14:09.015199  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:14:09.050930  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:14:09.086524  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:14:09.119207  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:14:09.151483  839515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:14:09.186734  839515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:14:09.211662  839515 ssh_runner.go:195] Run: openssl version
	I0929 13:14:09.219872  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:14:09.232974  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237506  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.237581  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:14:09.247699  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:14:09.262697  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:14:09.277818  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283413  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.283551  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:14:09.293753  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:14:09.307826  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:14:09.322785  839515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328680  839515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.328758  839515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:14:09.337578  839515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:14:09.349565  839515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:14:09.355212  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:14:09.365031  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:14:09.376499  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:14:09.386571  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:14:09.396193  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:14:09.405722  839515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 13:14:09.416490  839515 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-504443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:14:09.416619  839515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:14:09.416692  839515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:14:09.480165  839515 cri.go:89] found id: ""
	I0929 13:14:09.480329  839515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:14:09.502356  839515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:14:09.502385  839515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:14:09.502465  839515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:14:09.516584  839515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:14:09.517974  839515 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-504443" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.518950  839515 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-564029/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-504443" cluster setting kubeconfig missing "default-k8s-diff-port-504443" context setting]
	I0929 13:14:09.520381  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.523350  839515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:14:09.540146  839515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:14:09.540271  839515 kubeadm.go:593] duration metric: took 37.87462ms to restartPrimaryControlPlane
	I0929 13:14:09.540292  839515 kubeadm.go:394] duration metric: took 123.821391ms to StartCluster
	I0929 13:14:09.540318  839515 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.540461  839515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:14:09.543243  839515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:14:09.543701  839515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:14:09.543964  839515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:14:09.544056  839515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544105  839515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544134  839515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504443"
	I0929 13:14:09.544215  839515 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:14:09.544297  839515 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544313  839515 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544323  839515 addons.go:247] addon dashboard should already be in state true
	I0929 13:14:09.544356  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544499  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.544580  839515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504443"
	I0929 13:14:09.544601  839515 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544610  839515 addons.go:247] addon metrics-server should already be in state true
	I0929 13:14:09.544638  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544779  839515 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.544826  839515 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:14:09.544867  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.544923  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545131  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.545706  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.546905  839515 out.go:179] * Verifying Kubernetes components...
	I0929 13:14:09.548849  839515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:14:09.588222  839515 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-504443"
	W0929 13:14:09.588254  839515 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:14:09.588394  839515 host.go:66] Checking if "default-k8s-diff-port-504443" exists ...
	I0929 13:14:09.589235  839515 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-504443 --format={{.State.Status}}
	I0929 13:14:09.591356  839515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:14:09.592899  839515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.592920  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:14:09.592997  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.599097  839515 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:14:09.603537  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:14:09.603567  839515 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:14:09.603641  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.623364  839515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:14:09.625378  839515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:14:09.626964  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:14:09.626991  839515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:14:09.627087  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.646947  839515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.647072  839515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:14:09.647170  839515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-504443
	I0929 13:14:09.657171  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.660429  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.682698  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.694425  839515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/default-k8s-diff-port-504443/id_rsa Username:docker}
	I0929 13:14:09.758623  839515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:14:09.782535  839515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:09.796122  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:14:09.824319  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:14:09.824349  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:14:09.831248  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:14:09.831269  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:14:09.857539  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:14:09.865401  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:14:09.865601  839515 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:14:09.868433  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:14:09.868454  839515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:14:09.911818  839515 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.911849  839515 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:14:09.919662  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:14:09.919693  839515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:14:09.945916  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:14:09.956819  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:14:09.956847  839515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:14:09.983049  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:14:09.983088  839515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:14:10.008150  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:14:10.008187  839515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:14:10.035225  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:14:10.035255  839515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:14:10.063000  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:14:10.063033  839515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:14:10.088151  839515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:10.088182  839515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:14:10.111599  839515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:14:12.055468  839515 node_ready.go:49] node "default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:12.055507  839515 node_ready.go:38] duration metric: took 2.272916493s for node "default-k8s-diff-port-504443" to be "Ready" ...
	I0929 13:14:12.055524  839515 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:14:12.055588  839515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:14:12.693113  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.896952632s)
	I0929 13:14:12.693205  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.835545565s)
	I0929 13:14:12.693264  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.747320981s)
	I0929 13:14:12.693289  839515 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-504443"
	I0929 13:14:12.693401  839515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.581752595s)
	I0929 13:14:12.693437  839515 api_server.go:72] duration metric: took 3.149694543s to wait for apiserver process to appear ...
	I0929 13:14:12.693448  839515 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:14:12.693465  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:12.695374  839515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-504443 addons enable metrics-server
	
	I0929 13:14:12.698283  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:12.698311  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:12.701668  839515 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0929 13:14:09.762777  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:12.254708  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:12.703272  839515 addons.go:514] duration metric: took 3.159290714s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0929 13:14:13.194062  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.199962  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.200005  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:13.693647  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:13.699173  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:14:13.699207  839515 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:14:14.193661  839515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:14:14.198386  839515 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:14:14.199540  839515 api_server.go:141] control plane version: v1.34.0
	I0929 13:14:14.199566  839515 api_server.go:131] duration metric: took 1.506111317s to wait for apiserver health ...
	I0929 13:14:14.199576  839515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:14:14.203404  839515 system_pods.go:59] 9 kube-system pods found
	I0929 13:14:14.203444  839515 system_pods.go:61] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.203452  839515 system_pods.go:61] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.203458  839515 system_pods.go:61] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.203465  839515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.203471  839515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.203482  839515 system_pods.go:61] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.203495  839515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.203503  839515 system_pods.go:61] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.203512  839515 system_pods.go:61] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.203520  839515 system_pods.go:74] duration metric: took 3.93835ms to wait for pod list to return data ...
	I0929 13:14:14.203531  839515 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:14:14.206279  839515 default_sa.go:45] found service account: "default"
	I0929 13:14:14.206304  839515 default_sa.go:55] duration metric: took 2.763244ms for default service account to be created ...
	I0929 13:14:14.206315  839515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:14:14.209977  839515 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:14.210027  839515 system_pods.go:89] "coredns-66bc5c9577-prpff" [406acfa0-0ee4-4e5d-9973-c6c9d8274e12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:14.210040  839515 system_pods.go:89] "etcd-default-k8s-diff-port-504443" [c9bfb34f-a52c-4b61-88ad-af8e0efe6856] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:14:14.210048  839515 system_pods.go:89] "kindnet-fb5jq" [8ced4713-9348-4e0d-8081-883c8ce45742] Running
	I0929 13:14:14.210057  839515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504443" [1d894cf9-e1e9-4147-8c26-5a3f5801b3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:14:14.210066  839515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504443" [fa48e960-9c46-48fa-9ee6-703b4a680474] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:14:14.210073  839515 system_pods.go:89] "kube-proxy-vcsfr" [615a9551-ae4b-47cd-a21b-19656c69390c] Running
	I0929 13:14:14.210082  839515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504443" [f5488057-2005-4d5c-abfd-be69b55d4699] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:14:14.210089  839515 system_pods.go:89] "metrics-server-746fcd58dc-l5t2q" [618425bc-036b-42f0-9fdf-4e7744bdd84d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:14:14.210121  839515 system_pods.go:89] "storage-provisioner" [df51460b-ca6e-41c5-8a7f-4eabf4dc5598] Running
	I0929 13:14:14.210130  839515 system_pods.go:126] duration metric: took 3.808134ms to wait for k8s-apps to be running ...
	I0929 13:14:14.210140  839515 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:14:14.210201  839515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:14:14.225164  839515 system_svc.go:56] duration metric: took 15.009784ms WaitForService to wait for kubelet
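The WaitForService step above boils down to running systemctl on the node and treating a zero exit code as "kubelet is active". A minimal local sketch of that check (minikube runs it over SSH via its ssh_runner; the plain "kubelet" unit name here is an assumption for the example):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}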
	I0929 13:14:14.225205  839515 kubeadm.go:578] duration metric: took 4.681459973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:14:14.225249  839515 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:14:14.228249  839515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:14:14.228290  839515 node_conditions.go:123] node cpu capacity is 8
	I0929 13:14:14.228307  839515 node_conditions.go:105] duration metric: took 3.048343ms to run NodePressure ...
	I0929 13:14:14.228326  839515 start.go:241] waiting for startup goroutines ...
	I0929 13:14:14.228336  839515 start.go:246] waiting for cluster config update ...
	I0929 13:14:14.228350  839515 start.go:255] writing updated cluster config ...
	I0929 13:14:14.228612  839515 ssh_runner.go:195] Run: rm -f paused
	I0929 13:14:14.233754  839515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:14.238169  839515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:14:16.244346  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:14.257696  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:16.754720  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:18.244963  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:20.245434  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:19.254143  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:21.754181  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:22.245771  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:24.743982  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:26.745001  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:23.755533  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:26.254152  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:29.244352  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:31.244535  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:28.753653  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:30.754009  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:33.744429  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:35.745000  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:33.254079  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	W0929 13:14:35.753251  837560 pod_ready.go:104] pod "coredns-66bc5c9577-vrkvb" is not "Ready", error: <nil>
	I0929 13:14:37.754125  837560 pod_ready.go:94] pod "coredns-66bc5c9577-vrkvb" is "Ready"
	I0929 13:14:37.754153  837560 pod_ready.go:86] duration metric: took 32.006559006s for pod "coredns-66bc5c9577-vrkvb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.757295  837560 pod_ready.go:83] waiting for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.762511  837560 pod_ready.go:94] pod "etcd-embed-certs-144376" is "Ready"
	I0929 13:14:37.762543  837560 pod_ready.go:86] duration metric: took 5.214008ms for pod "etcd-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.765205  837560 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.769732  837560 pod_ready.go:94] pod "kube-apiserver-embed-certs-144376" is "Ready"
	I0929 13:14:37.769763  837560 pod_ready.go:86] duration metric: took 4.5304ms for pod "kube-apiserver-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.772045  837560 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:37.952582  837560 pod_ready.go:94] pod "kube-controller-manager-embed-certs-144376" is "Ready"
	I0929 13:14:37.952613  837560 pod_ready.go:86] duration metric: took 180.54484ms for pod "kube-controller-manager-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.152075  837560 pod_ready.go:83] waiting for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.552510  837560 pod_ready.go:94] pod "kube-proxy-bdkrl" is "Ready"
	I0929 13:14:38.552543  837560 pod_ready.go:86] duration metric: took 400.438224ms for pod "kube-proxy-bdkrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:38.751930  837560 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152918  837560 pod_ready.go:94] pod "kube-scheduler-embed-certs-144376" is "Ready"
	I0929 13:14:39.152978  837560 pod_ready.go:86] duration metric: took 401.010043ms for pod "kube-scheduler-embed-certs-144376" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:39.152998  837560 pod_ready.go:40] duration metric: took 33.409779031s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:39.200854  837560 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:39.202814  837560 out.go:179] * Done! kubectl is now configured to use "embed-certs-144376" cluster and "default" namespace by default
	W0929 13:14:38.244646  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:40.745094  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:43.243922  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	W0929 13:14:45.744130  839515 pod_ready.go:104] pod "coredns-66bc5c9577-prpff" is not "Ready", error: <nil>
	I0929 13:14:46.743671  839515 pod_ready.go:94] pod "coredns-66bc5c9577-prpff" is "Ready"
	I0929 13:14:46.743700  839515 pod_ready.go:86] duration metric: took 32.505501945s for pod "coredns-66bc5c9577-prpff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.746421  839515 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.752034  839515 pod_ready.go:94] pod "etcd-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.752061  839515 pod_ready.go:86] duration metric: took 5.610516ms for pod "etcd-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.754137  839515 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.758705  839515 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.758739  839515 pod_ready.go:86] duration metric: took 4.576444ms for pod "kube-apiserver-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.761180  839515 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:46.941521  839515 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:46.941552  839515 pod_ready.go:86] duration metric: took 180.339824ms for pod "kube-controller-manager-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.141974  839515 pod_ready.go:83] waiting for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.541782  839515 pod_ready.go:94] pod "kube-proxy-vcsfr" is "Ready"
	I0929 13:14:47.541812  839515 pod_ready.go:86] duration metric: took 399.809326ms for pod "kube-proxy-vcsfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:47.742034  839515 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142534  839515 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-504443" is "Ready"
	I0929 13:14:48.142565  839515 pod_ready.go:86] duration metric: took 400.492621ms for pod "kube-scheduler-default-k8s-diff-port-504443" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:14:48.142578  839515 pod_ready.go:40] duration metric: took 33.908786928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:14:48.192681  839515 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:14:48.194961  839515 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-504443" cluster and "default" namespace by default
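The "extra waiting" loop that ends with the two "Done!" lines above keeps checking each labelled kube-system pod until it reports Ready (or disappears). A sketch of the same idea that shells out to kubectl instead of using client-go; waitReady is a hypothetical helper, the context and pod names are the ones from this run, and the real logic additionally treats a deleted pod as success:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitReady polls the pod's Ready condition via kubectl jsonpath until it is "True".
func waitReady(context, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	if err := waitReady("default-k8s-diff-port-504443", "kube-system",
		"coredns-66bc5c9577-prpff", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("coredns is Ready")
}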
	
	
	==> CRI-O <==
	Sep 29 13:28:10 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:10.140818600Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=980359cf-5dab-4d3f-8555-a8e48b9f7367 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:14 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:14.140316003Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9ec58f4a-f3bf-4e06-8d28-1d70284a8c22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:14 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:14.140613275Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9ec58f4a-f3bf-4e06-8d28-1d70284a8c22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:24 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:24.139971517Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=80a96796-7288-4826-b5f0-c13b2e52320c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:24 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:24.140345431Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=80a96796-7288-4826-b5f0-c13b2e52320c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:29 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:29.139914650Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=122c3258-c7d1-4ee6-96aa-ecf16f6acc33 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:29 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:29.140285163Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=122c3258-c7d1-4ee6-96aa-ecf16f6acc33 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:36 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:36.140045032Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b36e8850-b112-4413-b44b-02ec5f800b62 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:36 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:36.140418476Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b36e8850-b112-4413-b44b-02ec5f800b62 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:40 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:40.140103821Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e5088180-ab15-4cf5-a1ec-e8a99996b47b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:40 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:40.140440140Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e5088180-ab15-4cf5-a1ec-e8a99996b47b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:49 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:49.140398472Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f5bc0b75-2ae1-4b4b-b00b-75e3fb7021b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:49 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:49.140674532Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f5bc0b75-2ae1-4b4b-b00b-75e3fb7021b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:54 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:54.140618702Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b800838b-b9f1-42fd-8539-c3887fbb0a41 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:54 old-k8s-version-223488 crio[563]: time="2025-09-29 13:28:54.140946024Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b800838b-b9f1-42fd-8539-c3887fbb0a41 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:00 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:00.140712150Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8141f48a-2fc7-465e-a66d-74431e2e8a4f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:00 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:00.141057176Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8141f48a-2fc7-465e-a66d-74431e2e8a4f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:09 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:09.139854112Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e2fd80ce-88b4-4e5c-b581-7d0a422d9ab7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:09 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:09.140230612Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e2fd80ce-88b4-4e5c-b581-7d0a422d9ab7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:15 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:15.140245666Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6965c2d9-69fa-4491-ab8e-194345e3a4c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:15 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:15.140507717Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6965c2d9-69fa-4491-ab8e-194345e3a4c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:23 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:23.140931236Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=23eddbd7-7b3e-4f05-a0a6-8100849c29b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:23 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:23.141284858Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=23eddbd7-7b3e-4f05-a0a6-8100849c29b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:28 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:28.140761029Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=282f6a62-efe5-42d0-8f52-f1b499f1e013 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:28 old-k8s-version-223488 crio[563]: time="2025-09-29 13:29:28.141171691Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=282f6a62-efe5-42d0-8f52-f1b499f1e013 name=/runtime.v1.ImageService/ImageStatus
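The CRI-O entries above are the kubelet repeatedly asking ImageStatus for images that were never pulled (the dashboard image by digest, and the deliberately fake echoserver used by this test). A small sketch of checking image presence from the node with crictl; hasImage is a hypothetical helper, and it only lists images and scans the output rather than relying on any particular filter flag:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage reports whether `crictl images` output mentions the given reference.
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), ref), nil
}

func main() {
	for _, ref := range []string{
		"docker.io/kubernetesui/dashboard",
		"fake.domain/registry.k8s.io/echoserver",
	} {
		ok, err := hasImage(ref)
		fmt.Printf("%-45s present=%v err=%v\n", ref, ok, err)
	}
}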
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0833985586d47       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   9fc695ba94981       dashboard-metrics-scraper-5f989dc9cf-sm4lt
	f1080a53e734e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   2c08df618ae22       storage-provisioner
	f7374d71ac076       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                     1                   523b630c4c13e       coredns-5dd5756b68-w7p64
	48a60cedea0d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   ac9a2dac72f9b       kindnet-gkh8l
	f6464328e5ed7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   b139267d22cdd       busybox
	1980694c9b731       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   18 minutes ago      Running             kube-proxy                  1                   3d3e7a8c7ffaa       kube-proxy-fmnl8
	6350254ce867f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   2c08df618ae22       storage-provisioner
	b0fcfda364a2d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   18 minutes ago      Running             kube-scheduler              1                   d9471f2448ce1       kube-scheduler-old-k8s-version-223488
	d2acbb48a2ad1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   18 minutes ago      Running             kube-apiserver              1                   46df369160c5a       kube-apiserver-old-k8s-version-223488
	b89ec95aa6412       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   18 minutes ago      Running             kube-controller-manager     1                   ad68a2a621148       kube-controller-manager-old-k8s-version-223488
	e1bbb3fe053d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                        1                   a2c80dc375458       etcd-old-k8s-version-223488
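The table above is essentially `crictl ps -a` output; note the dashboard-metrics-scraper container sitting in Exited state after 8 attempts. A sketch that lists all containers on the node, including exited ones, and prints only the non-running rows (plain output scanning, nothing CRI-O specific beyond crictl itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		// Keep only rows whose STATE column reads Exited.
		if strings.Contains(line, "Exited") {
			fmt.Println(line)
		}
	}
}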
	
	
	==> coredns [f7374d71ac076a422f15d1fc4ac423e11d8d7d2f4314badc06d726747cad9a7f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39068 - 31847 "HINFO IN 3740510856147808050.6485710210283806308. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119457697s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
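The CoreDNS log above shows the kubernetes plugin unable to reach https://10.96.0.1:443 (i/o timeout), which keeps the ready plugin waiting. A sketch probing that readiness endpoint from inside the CoreDNS pod; port 8181 and path /ready are the ready plugin's defaults, so this is an assumption if the Corefile overrides them:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8181/ready")
	if err != nil {
		fmt.Println("ready endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 means every enabled plugin (including kubernetes) reported ready.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}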
	
	
	==> describe nodes <==
	Name:               old-k8s-version-223488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-223488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-223488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_09_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:09:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-223488
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:29:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:26:34 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:26:34 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:26:34 +0000   Mon, 29 Sep 2025 13:09:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:26:34 +0000   Mon, 29 Sep 2025 13:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-223488
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d95519c32a0a4fb19ce38cab34beaac2
	  System UUID:                41eac839-6b1b-4b6d-a6a7-9ab802ae2f2e
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-w7p64                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-223488                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-gkh8l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-223488             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-223488    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-fmnl8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-223488             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-cmxv5                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-sm4lt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gg4cr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-223488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-223488 event: Registered Node old-k8s-version-223488 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-223488 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-223488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-223488 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-223488 event: Registered Node old-k8s-version-223488 in Controller
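The node description above comes from `kubectl describe node`; the NodePressure verification earlier in this log effectively consumes the same conditions and capacity fields. A sketch pulling just those fields with kubectl jsonpath (the node name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	node := "old-k8s-version-223488"
	for _, jp := range []string{
		`{.status.conditions[?(@.type=="Ready")].status}`,
		`{.status.allocatable.cpu}`,
		`{.status.allocatable.memory}`,
	} {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", "jsonpath="+jp).Output()
		fmt.Printf("%-55s -> %s (err=%v)\n", jp, string(out), err)
	}
}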
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [e1bbb3fe053d4f6b4672b4f29700db930fe370ee31d7bbd99763468fba15c2de] <==
	{"level":"info","ts":"2025-09-29T13:10:43.038599Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T13:10:43.03866Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T13:10:43.038671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T13:10:44.317029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-09-29T13:10:44.317107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.317112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.317142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.31715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T13:10:44.318827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-223488 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:10:44.318827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:10:44.318854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:10:44.319143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:10:44.319208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:10:44.320134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:10:44.320185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-09-29T13:12:17.248134Z","caller":"traceutil/trace.go:171","msg":"trace[739949258] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"119.124427ms","start":"2025-09-29T13:12:17.128988Z","end":"2025-09-29T13:12:17.248113Z","steps":["trace[739949258] 'process raft request'  (duration: 118.985322ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:57.297407Z","caller":"traceutil/trace.go:171","msg":"trace[513510388] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"133.00081ms","start":"2025-09-29T13:12:57.164383Z","end":"2025-09-29T13:12:57.297384Z","steps":["trace[513510388] 'process raft request'  (duration: 132.872634ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:20:44.336191Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2025-09-29T13:20:44.338187Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":924,"took":"1.688416ms","hash":1991416014}
	{"level":"info","ts":"2025-09-29T13:20:44.338244Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1991416014,"revision":924,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:25:44.340823Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2025-09-29T13:25:44.342114Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1175,"took":"931.684µs","hash":1306096426}
	{"level":"info","ts":"2025-09-29T13:25:44.342155Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1306096426,"revision":1175,"compact-revision":924}
	
	
	==> kernel <==
	 13:29:29 up  3:11,  0 users,  load average: 0.55, 0.58, 1.14
	Linux old-k8s-version-223488 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [48a60cedea0d6dfd8c26c7fd40cd1a47fd53d4c52182ef59bc3979173acb1ce5] <==
	I0929 13:27:27.123027       1 main.go:301] handling current node
	I0929 13:27:37.124004       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:37.124054       1 main.go:301] handling current node
	I0929 13:27:47.118037       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:47.118087       1 main.go:301] handling current node
	I0929 13:27:57.121415       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:57.121451       1 main.go:301] handling current node
	I0929 13:28:07.119965       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:07.120022       1 main.go:301] handling current node
	I0929 13:28:17.117756       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:17.117802       1 main.go:301] handling current node
	I0929 13:28:27.121815       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:27.121857       1 main.go:301] handling current node
	I0929 13:28:37.119969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:37.120016       1 main.go:301] handling current node
	I0929 13:28:47.117487       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:47.117553       1 main.go:301] handling current node
	I0929 13:28:57.124983       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:57.125022       1 main.go:301] handling current node
	I0929 13:29:07.120450       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:07.120496       1 main.go:301] handling current node
	I0929 13:29:17.118108       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:17.118143       1 main.go:301] handling current node
	I0929 13:29:27.126092       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:27.126138       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2acbb48a2ad1b4f139989bbd165ed93cf360d3f6a8d47fbf90f6b4a2c7fbd8b] <==
	E0929 13:27:15.416929       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:27:25.418134       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:27:35.418965       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I0929 13:27:45.242103       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.162.8:443: connect: connection refused
	I0929 13:27:45.242125       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 13:27:45.419631       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:27:55.420654       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:28:05.420994       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:28:15.421458       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:28:25.422175       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:28:35.423017       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	I0929 13:28:45.242528       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.162.8:443: connect: connection refused
	I0929 13:28:45.242555       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 13:28:45.424013       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	W0929 13:28:46.320738       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:28:46.320775       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:28:46.320783       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:28:46.320839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:28:46.320925       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:28:46.321874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 13:28:55.424860       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:29:05.425316       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:29:15.426462       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 13:29:25.427648       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
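The repeated discovery and OpenAPI errors above come from the metrics-server APIService (v1beta1.metrics.k8s.io) whose backing service refuses connections on 10.110.162.8:443; the same unavailability shows up below as the controller-manager's "stale GroupVersion discovery" messages. A sketch that surfaces the APIService's Available condition and its message via kubectl:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, jp := range []string{
		`{.status.conditions[?(@.type=="Available")].status}`,
		`{.status.conditions[?(@.type=="Available")].message}`,
	} {
		out, err := exec.Command("kubectl", "get", "apiservice",
			"v1beta1.metrics.k8s.io", "-o", "jsonpath="+jp).Output()
		fmt.Printf("%s (err=%v)\n", out, err)
	}
}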
	
	
	==> kube-controller-manager [b89ec95aa641221a0461d0e0054bb6c82a40de4a33a7c7065c53c2891f6e4f18] <==
	I0929 13:25:36.150975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="199.073µs"
	I0929 13:25:51.150136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.03µs"
	E0929 13:25:58.574499       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:25:59.083471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:26:28.578697       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:26:29.090366       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:26:45.267116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.377045ms"
	I0929 13:26:45.267301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.988µs"
	I0929 13:26:46.272613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="8.313235ms"
	I0929 13:26:46.272740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.482µs"
	I0929 13:26:48.842479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.461µs"
	E0929 13:26:58.584078       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:26:59.098384       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:27:28.588749       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:27:29.105438       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:27:51.152409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.14µs"
	E0929 13:27:58.594117       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:27:59.113335       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:28:03.150383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="105.54µs"
	E0929 13:28:28.598307       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:28:29.121082       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:28:58.603302       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:28:59.130159       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:29:28.608476       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:29:29.138799       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1980694c9b7313a14cd5c4651f5cb23afa10cecec355a61371114306fbc630ef] <==
	I0929 13:10:46.717842       1 server_others.go:69] "Using iptables proxy"
	I0929 13:10:46.727581       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0929 13:10:46.748040       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:10:46.750659       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:10:46.750695       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:10:46.750701       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:10:46.750733       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:10:46.751014       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:10:46.751034       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:46.751704       1 config.go:188] "Starting service config controller"
	I0929 13:10:46.751734       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:10:46.751734       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:10:46.751752       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:10:46.751803       1 config.go:315] "Starting node config controller"
	I0929 13:10:46.751815       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:10:46.852269       1 shared_informer.go:318] Caches are synced for service config
	I0929 13:10:46.852404       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:10:46.852413       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b0fcfda364a2df1cfab036555acee98a844fcc156eaa9ff263e3f93d0ed32525] <==
	I0929 13:10:43.364709       1 serving.go:348] Generated self-signed cert in-memory
	W0929 13:10:45.284550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:10:45.284588       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:10:45.284605       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:10:45.284617       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:10:45.307155       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 13:10:45.307258       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:45.309928       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:45.309976       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 13:10:45.310654       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 13:10:45.310683       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 13:10:45.411131       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 13:28:10 old-k8s-version-223488 kubelet[712]: E0929 13:28:10.141135     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:28:14 old-k8s-version-223488 kubelet[712]: E0929 13:28:14.140859     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:28:21 old-k8s-version-223488 kubelet[712]: I0929 13:28:21.139999     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:28:21 old-k8s-version-223488 kubelet[712]: E0929 13:28:21.140302     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:28:24 old-k8s-version-223488 kubelet[712]: E0929 13:28:24.140690     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:28:29 old-k8s-version-223488 kubelet[712]: E0929 13:28:29.140652     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:28:32 old-k8s-version-223488 kubelet[712]: I0929 13:28:32.140534     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:28:32 old-k8s-version-223488 kubelet[712]: E0929 13:28:32.141034     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:28:36 old-k8s-version-223488 kubelet[712]: E0929 13:28:36.140709     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:28:40 old-k8s-version-223488 kubelet[712]: E0929 13:28:40.140755     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:28:44 old-k8s-version-223488 kubelet[712]: I0929 13:28:44.139949     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:28:44 old-k8s-version-223488 kubelet[712]: E0929 13:28:44.140401     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:28:49 old-k8s-version-223488 kubelet[712]: E0929 13:28:49.141060     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:28:54 old-k8s-version-223488 kubelet[712]: E0929 13:28:54.141241     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:28:59 old-k8s-version-223488 kubelet[712]: I0929 13:28:59.139554     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:28:59 old-k8s-version-223488 kubelet[712]: E0929 13:28:59.139928     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:29:00 old-k8s-version-223488 kubelet[712]: E0929 13:29:00.141325     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:29:09 old-k8s-version-223488 kubelet[712]: E0929 13:29:09.140586     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:29:10 old-k8s-version-223488 kubelet[712]: I0929 13:29:10.140251     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:29:10 old-k8s-version-223488 kubelet[712]: E0929 13:29:10.140668     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:29:15 old-k8s-version-223488 kubelet[712]: E0929 13:29:15.140859     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	Sep 29 13:29:23 old-k8s-version-223488 kubelet[712]: I0929 13:29:23.140300     712 scope.go:117] "RemoveContainer" containerID="0833985586d471a9d87b2265b33a3d633151ccfffba8c892b427bae2389a3bd1"
	Sep 29 13:29:23 old-k8s-version-223488 kubelet[712]: E0929 13:29:23.140704     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-sm4lt_kubernetes-dashboard(f9276cd9-efc4-4e03-a4a5-a18aa7ec3674)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-sm4lt" podUID="f9276cd9-efc4-4e03-a4a5-a18aa7ec3674"
	Sep 29 13:29:23 old-k8s-version-223488 kubelet[712]: E0929 13:29:23.141648     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cmxv5" podUID="c0acb856-6cc3-4baa-b0d8-dd82d6de83d3"
	Sep 29 13:29:28 old-k8s-version-223488 kubelet[712]: E0929 13:29:28.141521     712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg4cr" podUID="2a3f7370-a761-486c-993f-c0a0cc93ce6b"
	
	
	==> storage-provisioner [6350254ce867f1801e14d2a1ff83cd80c271543e49f2885304e1f0d47425adda] <==
	I0929 13:10:46.637609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:16.641483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f1080a53e734ed2fc814679a4192cbd38ed15d4cab74d67f852ef3d4759cc815] <==
	I0929 13:11:17.350350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 13:11:17.359169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 13:11:17.359220       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 13:11:34.757171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 13:11:34.757340       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab!
	I0929 13:11:34.757322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be1543b8-78ff-45f5-b24f-0db84f9fdd32", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab became leader
	I0929 13:11:34.857600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-223488_02129fe9-6bbb-409a-91e5-b305fbe139ab!
	

                                                
                                                
-- /stdout --
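The two storage-provisioner blocks above show a restart hand-off: the first instance (6350254c…) could not reach the API server at 10.96.0.1:443 during the restart window and exited, while its replacement (f1080a53…) initialized and acquired the kube-system/k8s.io-minikube-hostpath leader lease. If the profile were still running (the Audit table later in this report shows it was deleted at 13:29), one way to confirm which instance holds the lease would be the sketch below, assuming this legacy provisioner uses an Endpoints-based lock as the Event above suggests:

	# Hypothetical check, assuming the profile still exists; the holder identity is
	# recorded in the leader-election annotation on this Endpoints object.
	kubectl --context old-k8s-version-223488 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml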
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-223488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr: exit status 1 (65.366113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cmxv5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-gg4cr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-223488 describe pod metrics-server-57f55c9bc5-cmxv5 kubernetes-dashboard-8694d4445c-gg4cr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dls8" [aae6c127-73bd-4658-8206-ab662eaea2b1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:22:12.494966  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:58.453737  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:23:15.385784  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:29:45.793998067 +0000 UTC m=+3856.094034527
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-929827 describe po kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-929827 describe po kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-9dls8
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-929827/192.168.103.2
Start Time:       Mon, 29 Sep 2025 13:11:11 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8mhlf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-8mhlf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8 to no-preload-929827
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m23s (x48 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m47s (x51 over 17m)  kubelet            Error: ImagePullBackOff
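The Failed events above point at Docker Hub's anonymous pull rate limit (toomanyrequests) rather than at a broken image reference. As a sketch of the check Docker documents for anonymous pulls (assuming curl and jq are available on the CI host), the remaining quota could be inspected with:

	# Fetch an anonymous token for Docker's rate-limit preview repository,
	# then read the ratelimit-limit / ratelimit-remaining response headers.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit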
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard: exit status 1 (80.717314ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-9dls8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-929827 logs kubernetes-dashboard-855c9754f9-9dls8 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
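Since the harness only waits for the pod and never authenticates the pull, one possible mitigation (a sketch, not something this test currently does) is to pull the dashboard image on the host, which may have credentials or remaining quota, and side-load it into the profile before enabling the addon. The manifest pins the image by digest, so this only helps when the v2.7.0 tag still resolves to that digest:

	# Hypothetical pre-load; profile name taken from this run.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p no-preload-929827 image load docker.io/kubernetesui/dashboard:v2.7.0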
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-929827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-929827
helpers_test.go:243: (dbg) docker inspect no-preload-929827:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac",
	        "Created": "2025-09-29T13:09:36.134872723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 817261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:58.068596599Z",
	            "FinishedAt": "2025-09-29T13:10:57.197117344Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/hostname",
	        "HostsPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/hosts",
	        "LogPath": "/var/lib/docker/containers/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac/143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac-json.log",
	        "Name": "/no-preload-929827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-929827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-929827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "143d78ecaef5fe4cf78b4cb655df74aa2e6ad70ca10135976940e6575200bcac",
	                "LowerDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d54ef0a75c6fc423e353a65fb8436c813495860380aa6c5111b915c9ea514a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-929827",
	                "Source": "/var/lib/docker/volumes/no-preload-929827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-929827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-929827",
	                "name.minikube.sigs.k8s.io": "no-preload-929827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5e0205b877974f862bf692adf980537493b00dd53d07253c81b9026c2e99739",
	            "SandboxKey": "/var/run/docker/netns/d5e0205b8779",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-929827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:c4:46:31:dc:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "df408269424551a4f38c50a43890d5ab69bd7640c4c8f425e46136888332a1e7",
	                    "EndpointID": "fefb57f53176d4c31f4392a8dcd3b010959999cdbd71ae0500a3e93debb86f54",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-929827",
	                        "143d78ecaef5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-929827 -n no-preload-929827
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-929827 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-929827 logs -n 25: (1.396497547s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-929827 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-929827            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	│ start   │ -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p cert-expiration-171552                                                                                                                                                                                                                     │ cert-expiration-171552       │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p kubernetes-upgrade-300182                                                                                                                                                                                                                  │ kubernetes-upgrade-300182    │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ delete  │ -p disable-driver-mounts-707559                                                                                                                                                                                                               │ disable-driver-mounts-707559 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:12 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:12 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p embed-certs-144376 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ stop    │ -p default-k8s-diff-port-504443 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ start   │ -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-144376           │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ start   │ -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-504443 │ jenkins │ v1.37.0 │ 29 Sep 25 13:14 UTC │ 29 Sep 25 13:14 UTC │
	│ image   │ old-k8s-version-223488 image list --format=json                                                                                                                                                                                               │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ pause   │ -p old-k8s-version-223488 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ unpause │ -p old-k8s-version-223488 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p old-k8s-version-223488                                                                                                                                                                                                                     │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p old-k8s-version-223488                                                                                                                                                                                                                     │ old-k8s-version-223488       │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ start   │ -p newest-cni-597617 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-597617            │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:29:36
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:29:36.904636  853456 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:29:36.904910  853456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:29:36.904919  853456 out.go:374] Setting ErrFile to fd 2...
	I0929 13:29:36.904923  853456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:29:36.905134  853456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:29:36.905693  853456 out.go:368] Setting JSON to false
	I0929 13:29:36.907029  853456 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11522,"bootTime":1759141055,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:29:36.907178  853456 start.go:140] virtualization: kvm guest
	I0929 13:29:36.909688  853456 out.go:179] * [newest-cni-597617] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:29:36.911026  853456 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:29:36.911027  853456 notify.go:220] Checking for updates...
	I0929 13:29:36.913875  853456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:29:36.915626  853456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:29:36.917022  853456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:29:36.918384  853456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:29:36.919647  853456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:29:36.921715  853456 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:29:36.921817  853456 config.go:182] Loaded profile config "embed-certs-144376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:29:36.921955  853456 config.go:182] Loaded profile config "no-preload-929827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:29:36.922079  853456 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:29:36.949491  853456 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:29:36.949609  853456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:29:37.012952  853456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:29:36.998162615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:29:37.013133  853456 docker.go:318] overlay module found
	I0929 13:29:37.015197  853456 out.go:179] * Using the docker driver based on user configuration
	I0929 13:29:37.016520  853456 start.go:304] selected driver: docker
	I0929 13:29:37.016536  853456 start.go:924] validating driver "docker" against <nil>
	I0929 13:29:37.016550  853456 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:29:37.017223  853456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:29:37.075006  853456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:29:37.064133615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:29:37.075237  853456 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0929 13:29:37.075273  853456 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0929 13:29:37.075541  853456 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:29:37.077981  853456 out.go:179] * Using Docker driver with root privileges
	I0929 13:29:37.079239  853456 cni.go:84] Creating CNI manager for ""
	I0929 13:29:37.079320  853456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 13:29:37.079331  853456 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 13:29:37.079420  853456 start.go:348] cluster config:
	{Name:newest-cni-597617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-597617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:29:37.081186  853456 out.go:179] * Starting "newest-cni-597617" primary control-plane node in "newest-cni-597617" cluster
	I0929 13:29:37.082559  853456 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:29:37.083949  853456 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:29:37.085560  853456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:29:37.085621  853456 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:29:37.085638  853456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:29:37.085644  853456 cache.go:58] Caching tarball of preloaded images
	I0929 13:29:37.085859  853456 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:29:37.085876  853456 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:29:37.086042  853456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/newest-cni-597617/config.json ...
	I0929 13:29:37.086080  853456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/newest-cni-597617/config.json: {Name:mk805f5699b8e967ab129fe5cb68bfe9e411ed74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:29:37.110375  853456 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:29:37.110398  853456 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:29:37.110415  853456 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:29:37.110447  853456 start.go:360] acquireMachinesLock for newest-cni-597617: {Name:mkff2f43ac4cfb2ce1d5f1ba19d983cd1692c556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:29:37.110547  853456 start.go:364] duration metric: took 83.61µs to acquireMachinesLock for "newest-cni-597617"
	I0929 13:29:37.110572  853456 start.go:93] Provisioning new machine with config: &{Name:newest-cni-597617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-597617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false D
isableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:29:37.110632  853456 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:29:37.112968  853456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:29:37.113273  853456 start.go:159] libmachine.API.Create for "newest-cni-597617" (driver="docker")
	I0929 13:29:37.113313  853456 client.go:168] LocalClient.Create starting
	I0929 13:29:37.113384  853456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem
	I0929 13:29:37.113420  853456 main.go:141] libmachine: Decoding PEM data...
	I0929 13:29:37.113437  853456 main.go:141] libmachine: Parsing certificate...
	I0929 13:29:37.113508  853456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem
	I0929 13:29:37.113531  853456 main.go:141] libmachine: Decoding PEM data...
	I0929 13:29:37.113547  853456 main.go:141] libmachine: Parsing certificate...
	I0929 13:29:37.113913  853456 cli_runner.go:164] Run: docker network inspect newest-cni-597617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:29:37.133536  853456 cli_runner.go:211] docker network inspect newest-cni-597617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:29:37.133647  853456 network_create.go:284] running [docker network inspect newest-cni-597617] to gather additional debugging logs...
	I0929 13:29:37.133687  853456 cli_runner.go:164] Run: docker network inspect newest-cni-597617
	W0929 13:29:37.153976  853456 cli_runner.go:211] docker network inspect newest-cni-597617 returned with exit code 1
	I0929 13:29:37.154013  853456 network_create.go:287] error running [docker network inspect newest-cni-597617]: docker network inspect newest-cni-597617: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-597617 not found
	I0929 13:29:37.154030  853456 network_create.go:289] output of [docker network inspect newest-cni-597617]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-597617 not found
	
	** /stderr **
	I0929 13:29:37.154167  853456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:29:37.174161  853456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-658937e2822f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:db:59:32:33:14} reservation:<nil>}
	I0929 13:29:37.175042  853456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0aedf79fab3f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:00:40:22:c0:9d} reservation:<nil>}
	I0929 13:29:37.175835  853456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4e6b729de02 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:90:ed:5e:c1:cf} reservation:<nil>}
	I0929 13:29:37.176499  853456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f5b4e4a14093 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:71:86:c8:61:29} reservation:<nil>}
	I0929 13:29:37.177166  853456 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a07eab15133 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:ea:a5:28:87:6b} reservation:<nil>}
	I0929 13:29:37.178101  853456 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de1ba0}
	I0929 13:29:37.178145  853456 network_create.go:124] attempt to create docker network newest-cni-597617 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0929 13:29:37.178208  853456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-597617 newest-cni-597617
	I0929 13:29:37.243596  853456 network_create.go:108] docker network newest-cni-597617 192.168.94.0/24 created
	I0929 13:29:37.243641  853456 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-597617" container
	I0929 13:29:37.243719  853456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:29:37.264748  853456 cli_runner.go:164] Run: docker volume create newest-cni-597617 --label name.minikube.sigs.k8s.io=newest-cni-597617 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:29:37.285346  853456 oci.go:103] Successfully created a docker volume newest-cni-597617
	I0929 13:29:37.285447  853456 cli_runner.go:164] Run: docker run --rm --name newest-cni-597617-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-597617 --entrypoint /usr/bin/test -v newest-cni-597617:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:29:37.728265  853456 oci.go:107] Successfully prepared a docker volume newest-cni-597617
	I0929 13:29:37.728331  853456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:29:37.728362  853456 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:29:37.728433  853456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-597617:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:29:42.268750  853456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-597617:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.540155667s)
	I0929 13:29:42.268799  853456 kic.go:203] duration metric: took 4.540432573s to extract preloaded images to volume ...
	W0929 13:29:42.269213  853456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 13:29:42.269298  853456 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 13:29:42.269383  853456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:29:42.330781  853456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-597617 --name newest-cni-597617 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-597617 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-597617 --network newest-cni-597617 --ip 192.168.94.2 --volume newest-cni-597617:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:29:42.636708  853456 cli_runner.go:164] Run: docker container inspect newest-cni-597617 --format={{.State.Running}}
	I0929 13:29:42.657511  853456 cli_runner.go:164] Run: docker container inspect newest-cni-597617 --format={{.State.Status}}
	I0929 13:29:42.678150  853456 cli_runner.go:164] Run: docker exec newest-cni-597617 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:29:42.730316  853456 oci.go:144] the created container "newest-cni-597617" has a running status.
	I0929 13:29:42.730365  853456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa...
	I0929 13:29:43.062954  853456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:29:43.093522  853456 cli_runner.go:164] Run: docker container inspect newest-cni-597617 --format={{.State.Status}}
	I0929 13:29:43.117403  853456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:29:43.117424  853456 kic_runner.go:114] Args: [docker exec --privileged newest-cni-597617 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:29:43.172550  853456 cli_runner.go:164] Run: docker container inspect newest-cni-597617 --format={{.State.Status}}
	I0929 13:29:43.193538  853456 machine.go:93] provisionDockerMachine start ...
	I0929 13:29:43.193636  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:43.214584  853456 main.go:141] libmachine: Using SSH client type: native
	I0929 13:29:43.214860  853456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0929 13:29:43.214893  853456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:29:43.361022  853456 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-597617
	
	I0929 13:29:43.361063  853456 ubuntu.go:182] provisioning hostname "newest-cni-597617"
	I0929 13:29:43.361152  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:43.384628  853456 main.go:141] libmachine: Using SSH client type: native
	I0929 13:29:43.384943  853456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0929 13:29:43.384966  853456 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-597617 && echo "newest-cni-597617" | sudo tee /etc/hostname
	I0929 13:29:43.544268  853456 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-597617
	
	I0929 13:29:43.544380  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:43.564701  853456 main.go:141] libmachine: Using SSH client type: native
	I0929 13:29:43.564966  853456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0929 13:29:43.565014  853456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-597617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-597617/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-597617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:29:43.708020  853456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:29:43.708085  853456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:29:43.708156  853456 ubuntu.go:190] setting up certificates
	I0929 13:29:43.708172  853456 provision.go:84] configureAuth start
	I0929 13:29:43.708246  853456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-597617
	I0929 13:29:43.728803  853456 provision.go:143] copyHostCerts
	I0929 13:29:43.728879  853456 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:29:43.728934  853456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:29:43.729049  853456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:29:43.729187  853456 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:29:43.729203  853456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:29:43.729249  853456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:29:43.729352  853456 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:29:43.729363  853456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:29:43.729412  853456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:29:43.729493  853456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.newest-cni-597617 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-597617]
	I0929 13:29:44.139779  853456 provision.go:177] copyRemoteCerts
	I0929 13:29:44.139846  853456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:29:44.139898  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.159939  853456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa Username:docker}
	I0929 13:29:44.261280  853456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:29:44.294170  853456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:29:44.325293  853456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:29:44.354910  853456 provision.go:87] duration metric: took 646.715324ms to configureAuth
	I0929 13:29:44.354950  853456 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:29:44.355173  853456 config.go:182] Loaded profile config "newest-cni-597617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:29:44.355318  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.377181  853456 main.go:141] libmachine: Using SSH client type: native
	I0929 13:29:44.377439  853456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0929 13:29:44.377464  853456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:29:44.634705  853456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:29:44.634738  853456 machine.go:96] duration metric: took 1.441175847s to provisionDockerMachine
	I0929 13:29:44.634750  853456 client.go:171] duration metric: took 7.521430851s to LocalClient.Create
	I0929 13:29:44.634771  853456 start.go:167] duration metric: took 7.521500838s to libmachine.API.Create "newest-cni-597617"
	I0929 13:29:44.634779  853456 start.go:293] postStartSetup for "newest-cni-597617" (driver="docker")
	I0929 13:29:44.634791  853456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:29:44.634858  853456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:29:44.634931  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.655995  853456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa Username:docker}
	I0929 13:29:44.759295  853456 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:29:44.763843  853456 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:29:44.763908  853456 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:29:44.763922  853456 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:29:44.763936  853456 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:29:44.763960  853456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:29:44.764022  853456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:29:44.764120  853456 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:29:44.764222  853456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:29:44.774971  853456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:29:44.807348  853456 start.go:296] duration metric: took 172.552442ms for postStartSetup
	I0929 13:29:44.807736  853456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-597617
	I0929 13:29:44.827381  853456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/newest-cni-597617/config.json ...
	I0929 13:29:44.827675  853456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:29:44.827720  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.847585  853456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa Username:docker}
	I0929 13:29:44.944694  853456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:29:44.949853  853456 start.go:128] duration metric: took 7.839200503s to createHost
	I0929 13:29:44.949904  853456 start.go:83] releasing machines lock for "newest-cni-597617", held for 7.839343361s
	I0929 13:29:44.949992  853456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-597617
	I0929 13:29:44.969597  853456 ssh_runner.go:195] Run: cat /version.json
	I0929 13:29:44.969663  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.969664  853456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:29:44.969863  853456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-597617
	I0929 13:29:44.989696  853456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa Username:docker}
	I0929 13:29:44.990143  853456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/newest-cni-597617/id_rsa Username:docker}
	I0929 13:29:45.164193  853456 ssh_runner.go:195] Run: systemctl --version
	I0929 13:29:45.170229  853456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:29:45.318157  853456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:29:45.323485  853456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:29:45.350144  853456 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:29:45.350254  853456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:29:45.386737  853456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:29:45.386773  853456 start.go:495] detecting cgroup driver to use...
	I0929 13:29:45.386810  853456 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:29:45.386872  853456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:29:45.405168  853456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:29:45.418853  853456 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:29:45.418936  853456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:29:45.434727  853456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:29:45.453510  853456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:29:45.539379  853456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:29:45.625056  853456 docker.go:234] disabling docker service ...
	I0929 13:29:45.625123  853456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:29:45.646085  853456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:29:45.660601  853456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:29:45.754860  853456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:29:45.883050  853456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:29:45.896781  853456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:29:45.917271  853456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:29:45.917361  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:45.932961  853456 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:29:45.933042  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:45.946176  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:45.958642  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:45.971462  853456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:29:45.984020  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:45.996558  853456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:46.019274  853456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:29:46.032142  853456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:29:46.044048  853456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:29:46.055136  853456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:29:46.181006  853456 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:29:46.281659  853456 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:29:46.281745  853456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:29:46.286130  853456 start.go:563] Will wait 60s for crictl version
	I0929 13:29:46.286198  853456 ssh_runner.go:195] Run: which crictl
	I0929 13:29:46.290454  853456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:29:46.337508  853456 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:29:46.337616  853456 ssh_runner.go:195] Run: crio --version
	I0929 13:29:46.380120  853456 ssh_runner.go:195] Run: crio --version
	I0929 13:29:46.426685  853456 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:29:46.428149  853456 cli_runner.go:164] Run: docker network inspect newest-cni-597617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:29:46.448782  853456 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 13:29:46.453669  853456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:29:46.470243  853456 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Sep 29 13:28:30 no-preload-929827 crio[562]: time="2025-09-29 13:28:30.141965562Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3cf17111-7b6f-40ee-9ea8-ae9538090af3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:34 no-preload-929827 crio[562]: time="2025-09-29 13:28:34.141688599Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=743f8aa4-b287-4132-b8b0-6e57bfefce6d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:34 no-preload-929827 crio[562]: time="2025-09-29 13:28:34.141973528Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=743f8aa4-b287-4132-b8b0-6e57bfefce6d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:45 no-preload-929827 crio[562]: time="2025-09-29 13:28:45.142288796Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=773ca058-3498-4ed6-87c8-951c76b6d7bc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:45 no-preload-929827 crio[562]: time="2025-09-29 13:28:45.142671577Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=773ca058-3498-4ed6-87c8-951c76b6d7bc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:49 no-preload-929827 crio[562]: time="2025-09-29 13:28:49.141786741Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=05dc324a-aaad-453e-8521-5dd79783dd77 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:49 no-preload-929827 crio[562]: time="2025-09-29 13:28:49.142077942Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=05dc324a-aaad-453e-8521-5dd79783dd77 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:56 no-preload-929827 crio[562]: time="2025-09-29 13:28:56.141950047Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=55f7da6d-adc6-419e-8a32-67ece589bc4c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:28:56 no-preload-929827 crio[562]: time="2025-09-29 13:28:56.142229034Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=55f7da6d-adc6-419e-8a32-67ece589bc4c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:00 no-preload-929827 crio[562]: time="2025-09-29 13:29:00.142233392Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=be9a27c7-00ac-49f3-923a-e524aec9794d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:00 no-preload-929827 crio[562]: time="2025-09-29 13:29:00.142531356Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=be9a27c7-00ac-49f3-923a-e524aec9794d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:09 no-preload-929827 crio[562]: time="2025-09-29 13:29:09.142095460Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=73aefa08-2a6d-4c6a-8f8c-af8effd81fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:09 no-preload-929827 crio[562]: time="2025-09-29 13:29:09.142473584Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=73aefa08-2a6d-4c6a-8f8c-af8effd81fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:11 no-preload-929827 crio[562]: time="2025-09-29 13:29:11.141924390Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e0b1a077-0eb8-4db8-95f6-3241595451d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:11 no-preload-929827 crio[562]: time="2025-09-29 13:29:11.142228255Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e0b1a077-0eb8-4db8-95f6-3241595451d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:22 no-preload-929827 crio[562]: time="2025-09-29 13:29:22.141840560Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f6e4eaa8-9534-4d74-a185-742e37543aa1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:22 no-preload-929827 crio[562]: time="2025-09-29 13:29:22.142304140Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f6e4eaa8-9534-4d74-a185-742e37543aa1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:24 no-preload-929827 crio[562]: time="2025-09-29 13:29:24.141204190Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e29d6039-45f7-4574-ab2f-a82a1d4fecfb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:24 no-preload-929827 crio[562]: time="2025-09-29 13:29:24.141430336Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e29d6039-45f7-4574-ab2f-a82a1d4fecfb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:33 no-preload-929827 crio[562]: time="2025-09-29 13:29:33.141822536Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5b5ce3ae-3378-4b07-84c0-e31067379369 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:33 no-preload-929827 crio[562]: time="2025-09-29 13:29:33.142148304Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5b5ce3ae-3378-4b07-84c0-e31067379369 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:39 no-preload-929827 crio[562]: time="2025-09-29 13:29:39.142032229Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a932289e-511b-4ad7-80d6-c1811823f924 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:39 no-preload-929827 crio[562]: time="2025-09-29 13:29:39.142332560Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a932289e-511b-4ad7-80d6-c1811823f924 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:46 no-preload-929827 crio[562]: time="2025-09-29 13:29:46.141793135Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=672b1ec6-1258-43b5-a357-bdcbb02220ed name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:29:46 no-preload-929827 crio[562]: time="2025-09-29 13:29:46.142201613Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=672b1ec6-1258-43b5-a357-bdcbb02220ed name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	360f455eaf549       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   fb718710d4565       dashboard-metrics-scraper-6ffb444bf9-vf7bg
	c83a8bad7ddf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   cd69e02213daa       storage-provisioner
	8fa465feaff34       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   6ea557efa4c5c       coredns-66bc5c9577-w9q72
	3d462f220f279       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   6c270d20e3a07       busybox
	a1328a5fb4884       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   1ba97ec482053       kindnet-q7vkx
	49f4eabe0b833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   cd69e02213daa       storage-provisioner
	96f6608315031       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   b9f9d50dd0b9d       kube-proxy-hxs55
	24ab90d24d8cc       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   2fd2fc10d5f25       kube-controller-manager-no-preload-929827
	f91e471fbeff1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   9701e4244e31e       kube-scheduler-no-preload-929827
	4f40a87ba6d97       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   fd6ef2e726ba2       etcd-no-preload-929827
	6bd50ae447d36       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   a97fb7662f9ca       kube-apiserver-no-preload-929827
	
	
	==> coredns [8fa465feaff34d461599f88d30ba96936af260e889f95893cd7a4b5ac8ddf10f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38086 - 2042 "HINFO IN 5773216621506957702.4151051612224799096. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099920741s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-929827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-929827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-929827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:10:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-929827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:29:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:28:39 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:28:39 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:28:39 +0000   Mon, 29 Sep 2025 13:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:28:39 +0000   Mon, 29 Sep 2025 13:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-929827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7490fe7b8e6c48fdbf612d06b66fe080
	  System UUID:                f34f8961-8004-415b-80a2-8959d9202514
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-w9q72                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-929827                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-q7vkx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-929827              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-929827     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-hxs55                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-929827              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-wf2g9               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vf7bg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9dls8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-929827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-929827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-929827 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-929827 event: Registered Node no-preload-929827 in Controller
	  Normal  NodeReady                19m                kubelet          Node no-preload-929827 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-929827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-929827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node no-preload-929827 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-929827 event: Registered Node no-preload-929827 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [4f40a87ba6d978785059adb6668c6f202a689264a19faec6e454909ae17ce1d2] <==
	{"level":"warn","ts":"2025-09-29T13:11:07.179461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.186361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.193546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.200361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.207300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.214990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.238924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.245450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.252281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:07.305091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:17.118099Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.788768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:17.118200Z","caller":"traceutil/trace.go:172","msg":"trace[963470827] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:705; }","duration":"185.899327ms","start":"2025-09-29T13:12:16.932287Z","end":"2025-09-29T13:12:17.118187Z","steps":["trace[963470827] 'range keys from in-memory index tree'  (duration: 185.708627ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:12:17.118069Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.544923ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:17.118275Z","caller":"traceutil/trace.go:172","msg":"trace[1903341512] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:705; }","duration":"108.768336ms","start":"2025-09-29T13:12:17.009492Z","end":"2025-09-29T13:12:17.118260Z","steps":["trace[1903341512] 'range keys from in-memory index tree'  (duration: 108.503547ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:56.373385Z","caller":"traceutil/trace.go:172","msg":"trace[169106305] transaction","detail":"{read_only:false; response_revision:755; number_of_response:1; }","duration":"224.492701ms","start":"2025-09-29T13:12:56.148875Z","end":"2025-09-29T13:12:56.373368Z","steps":["trace[169106305] 'process raft request'  (duration: 137.077668ms)","trace[169106305] 'compare'  (duration: 87.32589ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:12:57.023342Z","caller":"traceutil/trace.go:172","msg":"trace[1751101268] transaction","detail":"{read_only:false; response_revision:757; number_of_response:1; }","duration":"222.098886ms","start":"2025-09-29T13:12:56.801227Z","end":"2025-09-29T13:12:57.023326Z","steps":["trace[1751101268] 'process raft request'  (duration: 221.803733ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:12:57.247186Z","caller":"traceutil/trace.go:172","msg":"trace[831769672] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"191.435845ms","start":"2025-09-29T13:12:57.055734Z","end":"2025-09-29T13:12:57.247170Z","steps":["trace[831769672] 'process raft request'  (duration: 98.701642ms)","trace[831769672] 'compare'  (duration: 92.55563ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T13:12:57.555396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.800071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:12:57.555472Z","caller":"traceutil/trace.go:172","msg":"trace[89482518] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:758; }","duration":"189.894018ms","start":"2025-09-29T13:12:57.365565Z","end":"2025-09-29T13:12:57.555459Z","steps":["trace[89482518] 'range keys from in-memory index tree'  (duration: 189.7243ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:21:06.773085Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":991}
	{"level":"info","ts":"2025-09-29T13:21:06.779688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":991,"took":"6.318009ms","hash":42485670,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3153920,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-29T13:21:06.779732Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":42485670,"revision":991,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:26:06.778765Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1267}
	{"level":"info","ts":"2025-09-29T13:26:06.781620Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1267,"took":"2.49417ms","hash":2204646568,"current-db-size-bytes":3153920,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T13:26:06.781657Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2204646568,"revision":1267,"compact-revision":991}
	
	
	==> kernel <==
	 13:29:47 up  3:12,  0 users,  load average: 2.00, 0.88, 1.23
	Linux no-preload-929827 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a1328a5fb48841316a3fb17d07e53f9189c3c039511d5573c077bfc7bf1656b9] <==
	I0929 13:27:38.997466       1 main.go:301] handling current node
	I0929 13:27:49.003715       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:49.003748       1 main.go:301] handling current node
	I0929 13:27:58.999456       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:58.999496       1 main.go:301] handling current node
	I0929 13:28:08.998057       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:08.998123       1 main.go:301] handling current node
	I0929 13:28:19.005223       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:19.005263       1 main.go:301] handling current node
	I0929 13:28:28.998037       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:28.998108       1 main.go:301] handling current node
	I0929 13:28:38.998034       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:38.998070       1 main.go:301] handling current node
	I0929 13:28:48.997982       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:48.998013       1 main.go:301] handling current node
	I0929 13:28:59.005040       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:59.005074       1 main.go:301] handling current node
	I0929 13:29:09.006049       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:29:09.006084       1 main.go:301] handling current node
	I0929 13:29:19.004983       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:29:19.005019       1 main.go:301] handling current node
	I0929 13:29:28.996979       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:29:28.997034       1 main.go:301] handling current node
	I0929 13:29:39.000149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:29:39.000191       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bd50ae447d368f855a5139301d83275bd68e0a665d001d305b5ceb6bd1d7d7e] <==
	I0929 13:26:08.738288       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:26:13.825293       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:27:08.737238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:27:08.737317       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:27:08.737334       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:27:08.739461       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:27:08.739563       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:27:08.739586       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:27:18.306801       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:27:32.865267       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:28:20.256690       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:28:33.735778       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:29:08.738456       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:29:08.738517       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:29:08.738532       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:29:08.740680       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:29:08.740765       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:29:08.740777       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:29:43.286772       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [24ab90d24d8cc43671fbc38a60650dcac255ec255ecd0edb7e610546456099f7] <==
	I0929 13:23:41.334590       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:11.241874       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:11.341493       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:41.246600       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:41.349273       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:11.250690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:11.355900       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:41.254533       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:41.363059       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:11.259323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:11.369633       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:41.263336       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:41.376182       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:11.267671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:11.384190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:41.272532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:41.391020       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:11.276819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:11.397520       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:41.281513       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:41.405766       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:11.285582       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:11.413767       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:41.289956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:41.421641       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [96f6608315031a72b38bee0947b7434da1e1f451ea3e30db7e84f3293c7add36] <==
	I0929 13:11:08.676516       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:11:08.730961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:11:08.831763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:11:08.831802       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:11:08.831945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:11:08.855126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:11:08.855188       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:11:08.861688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:11:08.862143       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:11:08.862176       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:11:08.864119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:11:08.864139       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:11:08.864156       1 config.go:309] "Starting node config controller"
	I0929 13:11:08.864169       1 config.go:200] "Starting service config controller"
	I0929 13:11:08.864175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:11:08.864169       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:11:08.864198       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:11:08.864213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:11:08.964960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:11:08.964998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:11:08.965017       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:11:08.965032       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f91e471fbeff1a6805284409bb41c627dddaaa8d0182d3c0ecf575635e0c4555] <==
	I0929 13:11:06.408408       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:11:07.713146       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:11:07.713203       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:11:07.713215       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:11:07.713224       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:11:07.744610       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:11:07.744638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:11:07.746492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:11:07.746608       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:11:07.746925       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:11:07.747006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:11:07.847171       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:29:02 no-preload-929827 kubelet[696]: I0929 13:29:02.141112     696 scope.go:117] "RemoveContainer" containerID="360f455eaf549de6a48e0a0c75073bf4f1f77be55d827b4d6b8a447e4ba68421"
	Sep 29 13:29:02 no-preload-929827 kubelet[696]: E0929 13:29:02.141306     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:29:05 no-preload-929827 kubelet[696]: E0929 13:29:05.289297     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152545289052858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:05 no-preload-929827 kubelet[696]: E0929 13:29:05.289346     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152545289052858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:09 no-preload-929827 kubelet[696]: E0929 13:29:09.142848     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:29:11 no-preload-929827 kubelet[696]: E0929 13:29:11.142523     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:29:15 no-preload-929827 kubelet[696]: I0929 13:29:15.141649     696 scope.go:117] "RemoveContainer" containerID="360f455eaf549de6a48e0a0c75073bf4f1f77be55d827b4d6b8a447e4ba68421"
	Sep 29 13:29:15 no-preload-929827 kubelet[696]: E0929 13:29:15.141819     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:29:15 no-preload-929827 kubelet[696]: E0929 13:29:15.290857     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152555290599202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:15 no-preload-929827 kubelet[696]: E0929 13:29:15.290928     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152555290599202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:22 no-preload-929827 kubelet[696]: E0929 13:29:22.142693     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:29:24 no-preload-929827 kubelet[696]: E0929 13:29:24.141752     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:29:25 no-preload-929827 kubelet[696]: E0929 13:29:25.292417     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152565292166069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:25 no-preload-929827 kubelet[696]: E0929 13:29:25.292455     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152565292166069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:27 no-preload-929827 kubelet[696]: I0929 13:29:27.141390     696 scope.go:117] "RemoveContainer" containerID="360f455eaf549de6a48e0a0c75073bf4f1f77be55d827b4d6b8a447e4ba68421"
	Sep 29 13:29:27 no-preload-929827 kubelet[696]: E0929 13:29:27.141619     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:29:33 no-preload-929827 kubelet[696]: E0929 13:29:33.142509     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	Sep 29 13:29:35 no-preload-929827 kubelet[696]: E0929 13:29:35.293987     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152575293708556  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:35 no-preload-929827 kubelet[696]: E0929 13:29:35.294018     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152575293708556  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:38 no-preload-929827 kubelet[696]: I0929 13:29:38.141207     696 scope.go:117] "RemoveContainer" containerID="360f455eaf549de6a48e0a0c75073bf4f1f77be55d827b4d6b8a447e4ba68421"
	Sep 29 13:29:38 no-preload-929827 kubelet[696]: E0929 13:29:38.141480     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vf7bg_kubernetes-dashboard(91f0d0a2-4413-461f-9f6f-3c01de756195)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vf7bg" podUID="91f0d0a2-4413-461f-9f6f-3c01de756195"
	Sep 29 13:29:39 no-preload-929827 kubelet[696]: E0929 13:29:39.142693     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wf2g9" podUID="89ae7449-1b2f-4bef-a3f5-c33bd22e757f"
	Sep 29 13:29:45 no-preload-929827 kubelet[696]: E0929 13:29:45.295500     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152585295252714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:45 no-preload-929827 kubelet[696]: E0929 13:29:45.295531     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152585295252714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 13:29:46 no-preload-929827 kubelet[696]: E0929 13:29:46.142646     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dls8" podUID="aae6c127-73bd-4658-8206-ab662eaea2b1"
	
	
	==> storage-provisioner [49f4eabe0b833d137c7c6ba8f9503c33dce71d7c3d65115d837d5f6594f7ee8b] <==
	I0929 13:11:08.626369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:38.631834       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c83a8bad7ddf0c4db96542bb906f5eb729c7a0d1960ef2624cbdff59f7811750] <==
	W0929 13:29:23.292482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:25.296658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:25.302296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:27.304991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:27.309070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:29.312727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:29.317047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:31.320370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:31.324331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:33.329279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:33.334366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:35.337873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:35.343319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:37.346615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:37.351022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:39.353852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:39.364443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:41.367924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:41.431161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:43.434872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:43.441526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:45.445258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:45.450315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:47.453652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:47.458365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-929827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8: exit status 1 (67.88309ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-wf2g9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9dls8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-929827 describe pod metrics-server-746fcd58dc-wf2g9 kubernetes-dashboard-855c9754f9-9dls8: exit status 1
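The NotFound errors above follow from how the helper invokes kubectl: the pods were listed with -A (all namespaces) but described without a namespace, so kubectl looked for them in default, where neither exists. A hedged example of the namespaced equivalents, with the namespaces taken from the kubelet log above (illustrative commands, not part of the recorded run):

	# Illustrative only: describe the non-running pods in their actual namespaces.
	kubectl --context no-preload-929827 -n kube-system describe pod metrics-server-746fcd58dc-wf2g9
	kubectl --context no-preload-929827 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-9dls8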
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zmzj7" [3d7707ff-be06-433e-a8ea-a5478e606f81] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:32:42.636552582 +0000 UTC m=+4032.936589046
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-144376 describe po kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-144376 describe po kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-zmzj7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-144376/192.168.85.2
Start Time:       Mon, 29 Sep 2025 13:14:07 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5kf5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-r5kf5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7 to embed-certs-144376
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m25s (x48 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m59s (x50 over 17m)  kubelet            Error: ImagePullBackOff
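The events above show every pull of the dashboard image being rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit, which is why the pod never leaves ImagePullBackOff during the 9m0s wait. A minimal sketch of one way to take Docker Hub out of the retry loop, assuming the embed-certs-144376 profile from this run and a host that still has pull quota (illustrative commands, not part of the recorded run):

	# Illustrative only: pre-pull the dashboard image on the host, then load it into
	# the minikube node so the kubelet can use the local copy instead of docker.io.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p embed-certs-144376 image load docker.io/kubernetesui/dashboard:v2.7.0
	# Confirm the image landed in the node's container storage.
	minikube -p embed-certs-144376 image ls | grep kubernetesui/dashboard

Whether the already backed-off pod picks the image up on its next retry depends on the kubelet's pull policy for the digest-pinned reference; deleting the pod so the ReplicaSet recreates it is the usual way to force a fresh attempt.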
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard: exit status 1 (84.40149ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-zmzj7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-144376 logs kubernetes-dashboard-855c9754f9-zmzj7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-144376 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-144376
helpers_test.go:243: (dbg) docker inspect embed-certs-144376:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316",
	        "Created": "2025-09-29T13:12:18.279731139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 837752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:13:53.446183728Z",
	            "FinishedAt": "2025-09-29T13:13:52.534833272Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/hostname",
	        "HostsPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/hosts",
	        "LogPath": "/var/lib/docker/containers/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316/66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316-json.log",
	        "Name": "/embed-certs-144376",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-144376:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-144376",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66bd64cb0222c8f5e2e03f79d61485da4a693c9215d2e49c4a5482eb59c57316",
	                "LowerDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23b776890370bc1bad48d4c638d81280d056796a44867650ec94cb5a337d0e2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-144376",
	                "Source": "/var/lib/docker/volumes/embed-certs-144376/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-144376",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-144376",
	                "name.minikube.sigs.k8s.io": "embed-certs-144376",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "622f9248a0d3a1bfda6c0b8dbad3656d816d31cf4ff76fdea36ae38c0f1862fa",
	            "SandboxKey": "/var/run/docker/netns/622f9248a0d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-144376": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:1d:81:10:62:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a07eab151337df82bb396e6f2b16fcc57dcc4e80efb3e20e1c2d63c513de844",
	                    "EndpointID": "dd90e325237351bf982579accbb7cff937c3e35e2d74f5d34f09c0838c0f3f25",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-144376",
	                        "66bd64cb0222"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-144376 -n embed-certs-144376
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-144376 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-144376 logs -n 25: (1.456696579s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-411536 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo docker system info                                                                                                                          │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cri-dockerd --version                                                                                                                       │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo containerd config dump                                                                                                                      │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo crio config                                                                                                                                 │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ delete  │ -p kindnet-411536                                                                                                                                                  │ kindnet-411536        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ start   │ -p custom-flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-411536 │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:32:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:32:11.885422  882031 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:32:11.885686  882031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:32:11.885710  882031 out.go:374] Setting ErrFile to fd 2...
	I0929 13:32:11.885717  882031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:32:11.886141  882031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:32:11.886903  882031 out.go:368] Setting JSON to false
	I0929 13:32:11.888265  882031 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11677,"bootTime":1759141055,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:32:11.888399  882031 start.go:140] virtualization: kvm guest
	I0929 13:32:11.891279  882031 out.go:179] * [custom-flannel-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:32:11.893393  882031 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:32:11.893377  882031 notify.go:220] Checking for updates...
	I0929 13:32:11.895421  882031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:32:11.897624  882031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:32:11.899609  882031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:32:11.903277  882031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:32:11.905671  882031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:32:11.908401  882031 config.go:182] Loaded profile config "calico-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:11.908563  882031 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:11.908664  882031 config.go:182] Loaded profile config "embed-certs-144376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:11.908836  882031 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:32:11.940246  882031 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:32:11.940434  882031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:32:12.002662  882031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:32:11.99102788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:32:12.002811  882031 docker.go:318] overlay module found
	I0929 13:32:12.004967  882031 out.go:179] * Using the docker driver based on user configuration
	I0929 13:32:12.006736  882031 start.go:304] selected driver: docker
	I0929 13:32:12.006769  882031 start.go:924] validating driver "docker" against <nil>
	I0929 13:32:12.006784  882031 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:32:12.007623  882031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:32:12.073041  882031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:32:12.059564405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:32:12.073234  882031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:32:12.073477  882031 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:32:12.075641  882031 out.go:179] * Using Docker driver with root privileges
	I0929 13:32:12.077116  882031 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0929 13:32:12.077156  882031 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0929 13:32:12.077249  882031 start.go:348] cluster config:
	{Name:custom-flannel-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:32:12.078909  882031 out.go:179] * Starting "custom-flannel-411536" primary control-plane node in "custom-flannel-411536" cluster
	I0929 13:32:12.080469  882031 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:32:12.081977  882031 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:32:12.083456  882031 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:32:12.083524  882031 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:32:12.083542  882031 cache.go:58] Caching tarball of preloaded images
	I0929 13:32:12.083575  882031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:32:12.083703  882031 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:32:12.083721  882031 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:32:12.083827  882031 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/config.json ...
	I0929 13:32:12.083850  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/config.json: {Name:mkf044f617aedf0bdd0d75e1d212f75220691b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:12.108190  882031 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:32:12.108211  882031 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:32:12.108227  882031 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:32:12.108255  882031 start.go:360] acquireMachinesLock for custom-flannel-411536: {Name:mkc877977a36d8d46b99991277ebeeb31cca322d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:32:12.108374  882031 start.go:364] duration metric: took 91.918µs to acquireMachinesLock for "custom-flannel-411536"
	I0929 13:32:12.108399  882031 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:32:12.108467  882031 start.go:125] createHost starting for "" (driver="docker")
	W0929 13:32:10.522040  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:13.023118  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:12.110526  882031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:32:12.110809  882031 start.go:159] libmachine.API.Create for "custom-flannel-411536" (driver="docker")
	I0929 13:32:12.110849  882031 client.go:168] LocalClient.Create starting
	I0929 13:32:12.110943  882031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem
	I0929 13:32:12.111014  882031 main.go:141] libmachine: Decoding PEM data...
	I0929 13:32:12.111031  882031 main.go:141] libmachine: Parsing certificate...
	I0929 13:32:12.111098  882031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem
	I0929 13:32:12.111122  882031 main.go:141] libmachine: Decoding PEM data...
	I0929 13:32:12.111134  882031 main.go:141] libmachine: Parsing certificate...
	I0929 13:32:12.111625  882031 cli_runner.go:164] Run: docker network inspect custom-flannel-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:32:12.130367  882031 cli_runner.go:211] docker network inspect custom-flannel-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:32:12.130453  882031 network_create.go:284] running [docker network inspect custom-flannel-411536] to gather additional debugging logs...
	I0929 13:32:12.130479  882031 cli_runner.go:164] Run: docker network inspect custom-flannel-411536
	W0929 13:32:12.150253  882031 cli_runner.go:211] docker network inspect custom-flannel-411536 returned with exit code 1
	I0929 13:32:12.150305  882031 network_create.go:287] error running [docker network inspect custom-flannel-411536]: docker network inspect custom-flannel-411536: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-411536 not found
	I0929 13:32:12.150323  882031 network_create.go:289] output of [docker network inspect custom-flannel-411536]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-411536 not found
	
	** /stderr **
	I0929 13:32:12.150459  882031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:32:12.171062  882031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-658937e2822f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:db:59:32:33:14} reservation:<nil>}
	I0929 13:32:12.171763  882031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0aedf79fab3f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:00:40:22:c0:9d} reservation:<nil>}
	I0929 13:32:12.172608  882031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4e6b729de02 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:90:ed:5e:c1:cf} reservation:<nil>}
	I0929 13:32:12.173219  882031 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f5b4e4a14093 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:71:86:c8:61:29} reservation:<nil>}
	I0929 13:32:12.173826  882031 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a07eab15133 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:ea:a5:28:87:6b} reservation:<nil>}
	I0929 13:32:12.174742  882031 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec3740}
	I0929 13:32:12.174767  882031 network_create.go:124] attempt to create docker network custom-flannel-411536 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0929 13:32:12.174834  882031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-411536 custom-flannel-411536
	I0929 13:32:12.243425  882031 network_create.go:108] docker network custom-flannel-411536 192.168.94.0/24 created
	I0929 13:32:12.243468  882031 kic.go:121] calculated static IP "192.168.94.2" for the "custom-flannel-411536" container
	I0929 13:32:12.243568  882031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:32:12.263096  882031 cli_runner.go:164] Run: docker volume create custom-flannel-411536 --label name.minikube.sigs.k8s.io=custom-flannel-411536 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:32:12.283739  882031 oci.go:103] Successfully created a docker volume custom-flannel-411536
	I0929 13:32:12.283836  882031 cli_runner.go:164] Run: docker run --rm --name custom-flannel-411536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-411536 --entrypoint /usr/bin/test -v custom-flannel-411536:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:32:12.711606  882031 oci.go:107] Successfully prepared a docker volume custom-flannel-411536
	I0929 13:32:12.711653  882031 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:32:12.711676  882031 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:32:12.711751  882031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-411536:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	W0929 13:32:15.521855  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:18.022352  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:17.165713  882031 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-411536:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.453886114s)
	I0929 13:32:17.165755  882031 kic.go:203] duration metric: took 4.454074054s to extract preloaded images to volume ...
	W0929 13:32:17.165871  882031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 13:32:17.165957  882031 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 13:32:17.166010  882031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:32:17.223132  882031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-411536 --name custom-flannel-411536 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-411536 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-411536 --network custom-flannel-411536 --ip 192.168.94.2 --volume custom-flannel-411536:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:32:17.540170  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Running}}
	I0929 13:32:17.560129  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:17.581451  882031 cli_runner.go:164] Run: docker exec custom-flannel-411536 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:32:17.633265  882031 oci.go:144] the created container "custom-flannel-411536" has a running status.
	I0929 13:32:17.633312  882031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa...
	I0929 13:32:17.794581  882031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:32:17.834220  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:17.858414  882031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:32:17.858441  882031 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-411536 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:32:17.911276  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:17.934416  882031 machine.go:93] provisionDockerMachine start ...
	I0929 13:32:17.934527  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:17.955347  882031 main.go:141] libmachine: Using SSH client type: native
	I0929 13:32:17.955613  882031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I0929 13:32:17.955633  882031 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:32:18.100066  882031 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-411536
	
	I0929 13:32:18.100106  882031 ubuntu.go:182] provisioning hostname "custom-flannel-411536"
	I0929 13:32:18.100198  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:18.121366  882031 main.go:141] libmachine: Using SSH client type: native
	I0929 13:32:18.121617  882031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I0929 13:32:18.121637  882031 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-411536 && echo "custom-flannel-411536" | sudo tee /etc/hostname
	I0929 13:32:18.278435  882031 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-411536
	
	I0929 13:32:18.278527  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:18.298926  882031 main.go:141] libmachine: Using SSH client type: native
	I0929 13:32:18.299203  882031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I0929 13:32:18.299226  882031 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-411536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-411536/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-411536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:32:18.442177  882031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:32:18.442214  882031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:32:18.442257  882031 ubuntu.go:190] setting up certificates
	I0929 13:32:18.442269  882031 provision.go:84] configureAuth start
	I0929 13:32:18.442335  882031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-411536
	I0929 13:32:18.462355  882031 provision.go:143] copyHostCerts
	I0929 13:32:18.462422  882031 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:32:18.462431  882031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:32:18.462510  882031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:32:18.462660  882031 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:32:18.462675  882031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:32:18.462706  882031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:32:18.462767  882031 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:32:18.462774  882031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:32:18.462798  882031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:32:18.462865  882031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-411536 san=[127.0.0.1 192.168.94.2 custom-flannel-411536 localhost minikube]
	I0929 13:32:18.663327  882031 provision.go:177] copyRemoteCerts
	I0929 13:32:18.663394  882031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:32:18.663431  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:18.682457  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:18.783230  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:32:18.815046  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0929 13:32:18.843544  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:32:18.873661  882031 provision.go:87] duration metric: took 431.377433ms to configureAuth
	I0929 13:32:18.873693  882031 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:32:18.873845  882031 config.go:182] Loaded profile config "custom-flannel-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:18.873978  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:18.894289  882031 main.go:141] libmachine: Using SSH client type: native
	I0929 13:32:18.894517  882031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I0929 13:32:18.894535  882031 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:32:19.148701  882031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:32:19.148745  882031 machine.go:96] duration metric: took 1.214285704s to provisionDockerMachine
	I0929 13:32:19.148759  882031 client.go:171] duration metric: took 7.037903895s to LocalClient.Create
	I0929 13:32:19.148784  882031 start.go:167] duration metric: took 7.037978136s to libmachine.API.Create "custom-flannel-411536"
	I0929 13:32:19.148798  882031 start.go:293] postStartSetup for "custom-flannel-411536" (driver="docker")
	I0929 13:32:19.148812  882031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:32:19.148877  882031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:32:19.148962  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:19.168958  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:19.272209  882031 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:32:19.276256  882031 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:32:19.276316  882031 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:32:19.276330  882031 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:32:19.276339  882031 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:32:19.276352  882031 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:32:19.276407  882031 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:32:19.276520  882031 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:32:19.276624  882031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:32:19.286966  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:32:19.320457  882031 start.go:296] duration metric: took 171.639592ms for postStartSetup
	I0929 13:32:19.320848  882031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-411536
	I0929 13:32:19.340188  882031 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/config.json ...
	I0929 13:32:19.340507  882031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:32:19.340563  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:19.361380  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:19.459213  882031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:32:19.464648  882031 start.go:128] duration metric: took 7.356161692s to createHost
	I0929 13:32:19.464680  882031 start.go:83] releasing machines lock for "custom-flannel-411536", held for 7.356294413s
	I0929 13:32:19.464750  882031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-411536
	I0929 13:32:19.483596  882031 ssh_runner.go:195] Run: cat /version.json
	I0929 13:32:19.483645  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:19.483678  882031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:32:19.483764  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:19.503721  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:19.503901  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:19.600876  882031 ssh_runner.go:195] Run: systemctl --version
	I0929 13:32:19.675565  882031 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:32:19.823670  882031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:32:19.829404  882031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:32:19.859268  882031 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:32:19.859360  882031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:32:19.894359  882031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:32:19.894385  882031 start.go:495] detecting cgroup driver to use...
	I0929 13:32:19.894420  882031 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:32:19.894481  882031 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:32:19.912721  882031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:32:19.927004  882031 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:32:19.927069  882031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:32:19.942529  882031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:32:19.959465  882031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:32:20.036968  882031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:32:20.118486  882031 docker.go:234] disabling docker service ...
	I0929 13:32:20.118549  882031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:32:20.140113  882031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:32:20.154216  882031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:32:20.228751  882031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:32:20.367758  882031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:32:20.382565  882031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:32:20.402125  882031 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:32:20.402185  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.416828  882031 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:32:20.416903  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.429038  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.440703  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.452875  882031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:32:20.464411  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.476488  882031 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.497994  882031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:32:20.510097  882031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:32:20.520666  882031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:32:20.531221  882031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:32:20.646340  882031 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:32:20.746176  882031 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:32:20.746262  882031 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:32:20.750607  882031 start.go:563] Will wait 60s for crictl version
	I0929 13:32:20.750667  882031 ssh_runner.go:195] Run: which crictl
	I0929 13:32:20.755004  882031 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:32:20.795331  882031 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:32:20.795433  882031 ssh_runner.go:195] Run: crio --version
	I0929 13:32:20.835349  882031 ssh_runner.go:195] Run: crio --version
	I0929 13:32:20.878982  882031 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:32:20.880419  882031 cli_runner.go:164] Run: docker network inspect custom-flannel-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:32:20.899711  882031 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 13:32:20.904411  882031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:32:20.919040  882031 kubeadm.go:875] updating cluster {Name:custom-flannel-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:32:20.919154  882031 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:32:20.919206  882031 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:32:20.994399  882031 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:32:20.994422  882031 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:32:20.994472  882031 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:32:21.033931  882031 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:32:21.034014  882031 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:32:21.034041  882031 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 crio true true} ...
	I0929 13:32:21.034144  882031 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-411536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0929 13:32:21.034216  882031 ssh_runner.go:195] Run: crio config
	I0929 13:32:21.084522  882031 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0929 13:32:21.084572  882031 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:32:21.084608  882031 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-411536 NodeName:custom-flannel-411536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:32:21.084775  882031 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-411536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:32:21.084840  882031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:32:21.096989  882031 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:32:21.097081  882031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:32:21.107613  882031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0929 13:32:21.129518  882031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:32:21.155234  882031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0929 13:32:21.176483  882031 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:32:21.180870  882031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:32:21.194267  882031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:32:21.270127  882031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:32:21.296246  882031 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536 for IP: 192.168.94.2
	I0929 13:32:21.296272  882031 certs.go:194] generating shared ca certs ...
	I0929 13:32:21.296295  882031 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:21.296497  882031 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:32:21.296551  882031 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:32:21.296565  882031 certs.go:256] generating profile certs ...
	I0929 13:32:21.296640  882031 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.key
	I0929 13:32:21.296658  882031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.crt with IP's: []
	I0929 13:32:21.447316  882031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.crt ...
	I0929 13:32:21.447347  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.crt: {Name:mk95fa9efafd9dfdca2f5f40c38b09edff152978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:21.447546  882031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.key ...
	I0929 13:32:21.447560  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/client.key: {Name:mk62eb30a97d9aca7fe6f5eafd3f60689c32c48c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:21.447669  882031 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key.9f85a2af
	I0929 13:32:21.447689  882031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt.9f85a2af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0929 13:32:21.777775  882031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt.9f85a2af ...
	I0929 13:32:21.777808  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt.9f85a2af: {Name:mk79b82a0c12495c488b6c231d760698f68e387a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:21.778030  882031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key.9f85a2af ...
	I0929 13:32:21.778051  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key.9f85a2af: {Name:mk1d8ac1ea84f76a0e49fef79f4c47ae3380bc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:21.778142  882031 certs.go:381] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt.9f85a2af -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt
	I0929 13:32:21.778222  882031 certs.go:385] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key.9f85a2af -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key
	I0929 13:32:21.778283  882031 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.key
	I0929 13:32:21.778300  882031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.crt with IP's: []
	I0929 13:32:22.101147  882031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.crt ...
	I0929 13:32:22.101178  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.crt: {Name:mkd9ad3e7e2b3e1e82286b36864e2bdc0006cfa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:22.101370  882031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.key ...
	I0929 13:32:22.101384  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.key: {Name:mka40584d03a2551a2c1d93595f15b0dec1bf5af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:22.101566  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:32:22.101602  882031 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:32:22.101613  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:32:22.101635  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:32:22.101658  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:32:22.101678  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:32:22.101720  882031 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:32:22.102383  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:32:22.131496  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:32:22.159640  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:32:22.189509  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:32:22.220308  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0929 13:32:22.249483  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:32:22.278738  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:32:22.307462  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/custom-flannel-411536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 13:32:22.336472  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:32:22.368006  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:32:22.397491  882031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:32:22.427535  882031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:32:22.449566  882031 ssh_runner.go:195] Run: openssl version
	I0929 13:32:22.456485  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:32:22.468699  882031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:32:22.473174  882031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:32:22.473234  882031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:32:22.480732  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:32:22.492497  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:32:22.504755  882031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:32:22.509338  882031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:32:22.509414  882031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:32:22.517650  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
	I0929 13:32:22.529957  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:32:22.542052  882031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:32:22.546769  882031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:32:22.546834  882031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:32:22.554936  882031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
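	The three openssl/ln pairs above implement the standard OpenSSL CA directory layout: each PEM under /usr/share/ca-certificates is hashed, then linked into /etc/ssl/certs as <subject-hash>.0. A minimal sketch of the same idea for a single certificate, using the minikubeCA.pem path from the log (illustrative only):
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"  # e.g. b5213941.0, the link name seen above
	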
	I0929 13:32:22.567773  882031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:32:22.572265  882031 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:32:22.572331  882031 kubeadm.go:392] StartCluster: {Name:custom-flannel-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:32:22.572420  882031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:32:22.572494  882031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:32:22.612957  882031 cri.go:89] found id: ""
	I0929 13:32:22.613048  882031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:32:22.624006  882031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 13:32:22.634902  882031 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 13:32:22.634955  882031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 13:32:22.645344  882031 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 13:32:22.645364  882031 kubeadm.go:157] found existing configuration files:
	
	I0929 13:32:22.645406  882031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 13:32:22.655549  882031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 13:32:22.655621  882031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 13:32:22.666554  882031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 13:32:22.677292  882031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 13:32:22.677366  882031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 13:32:22.688123  882031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 13:32:22.699320  882031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 13:32:22.699387  882031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 13:32:22.709596  882031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 13:32:22.720244  882031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 13:32:22.720323  882031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 13:32:22.730992  882031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 13:32:22.775055  882031 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 13:32:22.775107  882031 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 13:32:22.794087  882031 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 13:32:22.794169  882031 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 13:32:22.794215  882031 kubeadm.go:310] OS: Linux
	I0929 13:32:22.794281  882031 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 13:32:22.794349  882031 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 13:32:22.794402  882031 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 13:32:22.794504  882031 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 13:32:22.794601  882031 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 13:32:22.794672  882031 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 13:32:22.794745  882031 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 13:32:22.794817  882031 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 13:32:22.863987  882031 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 13:32:22.864267  882031 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 13:32:22.864404  882031 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 13:32:22.872153  882031 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0929 13:32:20.022482  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:22.522687  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:22.874555  882031 out.go:252]   - Generating certificates and keys ...
	I0929 13:32:22.874652  882031 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 13:32:22.874734  882031 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 13:32:23.182274  882031 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 13:32:23.349002  882031 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 13:32:23.651210  882031 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 13:32:23.793982  882031 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 13:32:24.085047  882031 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 13:32:24.085255  882031 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-411536 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0929 13:32:24.334658  882031 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 13:32:24.334861  882031 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-411536 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0929 13:32:24.410508  882031 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 13:32:24.459768  882031 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 13:32:24.782531  882031 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 13:32:24.782639  882031 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 13:32:24.997160  882031 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 13:32:25.110731  882031 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 13:32:25.364050  882031 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 13:32:25.637407  882031 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 13:32:25.782331  882031 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 13:32:25.782996  882031 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 13:32:25.787722  882031 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 13:32:25.791546  882031 out.go:252]   - Booting up control plane ...
	I0929 13:32:25.791663  882031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 13:32:25.791760  882031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 13:32:25.791856  882031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 13:32:25.801480  882031 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 13:32:25.801623  882031 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 13:32:25.808132  882031 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 13:32:25.808277  882031 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 13:32:25.808383  882031 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 13:32:25.892315  882031 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 13:32:25.892423  882031 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W0929 13:32:25.022661  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:27.023697  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:29.525221  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:26.893311  882031 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001010427s
	I0929 13:32:26.898312  882031 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 13:32:26.898429  882031 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0929 13:32:26.898626  882031 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 13:32:26.898740  882031 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 13:32:27.620391  882031 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 722.050313ms
	I0929 13:32:28.907949  882031 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.009536795s
	I0929 13:32:30.899515  882031 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001261339s
	I0929 13:32:30.912068  882031 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 13:32:30.924731  882031 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 13:32:30.936416  882031 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 13:32:30.936724  882031 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-411536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 13:32:30.946562  882031 kubeadm.go:310] [bootstrap-token] Using token: as7pfm.ivou000wasuxxc88
	I0929 13:32:30.948177  882031 out.go:252]   - Configuring RBAC rules ...
	I0929 13:32:30.948365  882031 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 13:32:30.953548  882031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 13:32:30.962568  882031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 13:32:30.965764  882031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 13:32:30.969444  882031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 13:32:30.973609  882031 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 13:32:31.307296  882031 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 13:32:31.725815  882031 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 13:32:32.306675  882031 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 13:32:32.307743  882031 kubeadm.go:310] 
	I0929 13:32:32.307848  882031 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 13:32:32.307866  882031 kubeadm.go:310] 
	I0929 13:32:32.307980  882031 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 13:32:32.308000  882031 kubeadm.go:310] 
	I0929 13:32:32.308051  882031 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 13:32:32.308115  882031 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 13:32:32.308181  882031 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 13:32:32.308192  882031 kubeadm.go:310] 
	I0929 13:32:32.308285  882031 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 13:32:32.308307  882031 kubeadm.go:310] 
	I0929 13:32:32.308368  882031 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 13:32:32.308378  882031 kubeadm.go:310] 
	I0929 13:32:32.308447  882031 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 13:32:32.308558  882031 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 13:32:32.308653  882031 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 13:32:32.308674  882031 kubeadm.go:310] 
	I0929 13:32:32.308758  882031 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 13:32:32.308842  882031 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 13:32:32.308856  882031 kubeadm.go:310] 
	I0929 13:32:32.309029  882031 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token as7pfm.ivou000wasuxxc88 \
	I0929 13:32:32.309175  882031 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b \
	I0929 13:32:32.309209  882031 kubeadm.go:310] 	--control-plane 
	I0929 13:32:32.309217  882031 kubeadm.go:310] 
	I0929 13:32:32.309344  882031 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 13:32:32.309356  882031 kubeadm.go:310] 
	I0929 13:32:32.309464  882031 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token as7pfm.ivou000wasuxxc88 \
	I0929 13:32:32.309586  882031 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b 
	I0929 13:32:32.313509  882031 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 13:32:32.313688  882031 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
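	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. A minimal sketch of how a joining node could recompute it for comparison, assuming the cluster CA at /var/lib/minikube/certs/ca.crt configured earlier in this log (illustrative only, not part of the test run):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl pkey -pubin -outform der \
	    | openssl dgst -sha256 -hex  # should reproduce the sha256:f1ec0d51... value printed above
	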
	I0929 13:32:32.313725  882031 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0929 13:32:32.315669  882031 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W0929 13:32:31.526204  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:34.024856  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:32.317186  882031 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 13:32:32.317273  882031 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0929 13:32:32.322136  882031 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0929 13:32:32.322172  882031 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0929 13:32:32.354111  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 13:32:32.697464  882031 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 13:32:32.697548  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:32.697582  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-411536 minikube.k8s.io/updated_at=2025_09_29T13_32_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=custom-flannel-411536 minikube.k8s.io/primary=true
	I0929 13:32:32.781455  882031 ops.go:34] apiserver oom_adj: -16
	I0929 13:32:32.781500  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:33.282398  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:33.781564  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:34.281782  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:34.782508  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:35.282618  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:35.782586  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:36.282140  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:36.782153  882031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:32:36.858327  882031 kubeadm.go:1105] duration metric: took 4.160836208s to wait for elevateKubeSystemPrivileges
	I0929 13:32:36.858367  882031 kubeadm.go:394] duration metric: took 14.286041559s to StartCluster
	I0929 13:32:36.858390  882031 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:36.858455  882031 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:32:36.860557  882031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:36.860841  882031 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:32:36.860979  882031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 13:32:36.860942  882031 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:32:36.861103  882031 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-411536"
	I0929 13:32:36.861144  882031 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-411536"
	I0929 13:32:36.861155  882031 config.go:182] Loaded profile config "custom-flannel-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:36.861180  882031 host.go:66] Checking if "custom-flannel-411536" exists ...
	I0929 13:32:36.861105  882031 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-411536"
	I0929 13:32:36.861278  882031 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-411536"
	I0929 13:32:36.861661  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:36.861743  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:36.862486  882031 out.go:179] * Verifying Kubernetes components...
	I0929 13:32:36.864161  882031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:32:36.888217  882031 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:32:36.892018  882031 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:32:36.892046  882031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:32:36.892113  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:36.892499  882031 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-411536"
	I0929 13:32:36.892551  882031 host.go:66] Checking if "custom-flannel-411536" exists ...
	I0929 13:32:36.893098  882031 cli_runner.go:164] Run: docker container inspect custom-flannel-411536 --format={{.State.Status}}
	I0929 13:32:36.922279  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:36.922333  882031 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:32:36.922344  882031 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:32:36.922404  882031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-411536
	I0929 13:32:36.954181  882031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/custom-flannel-411536/id_rsa Username:docker}
	I0929 13:32:36.977391  882031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
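	The sed pipeline above patches the coredns ConfigMap before feeding it back through kubectl replace: a hosts block is inserted ahead of the forward plugin and a log directive ahead of errors. The injected Corefile fragment, with values taken from the command itself, looks like:
	
	  hosts {
	     192.168.94.1 host.minikube.internal
	     fallthrough
	  }
	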
	I0929 13:32:37.016381  882031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:32:37.054798  882031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:32:37.088671  882031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:32:37.217660  882031 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0929 13:32:37.222644  882031 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-411536" to be "Ready" ...
	I0929 13:32:37.404843  882031 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0929 13:32:36.522117  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:39.021703  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:32:37.406043  882031 addons.go:514] duration metric: took 545.112209ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 13:32:37.724442  882031 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-411536" context rescaled to 1 replicas
	W0929 13:32:39.225929  882031 node_ready.go:57] node "custom-flannel-411536" has "Ready":"False" status (will retry)
	W0929 13:32:41.726527  882031 node_ready.go:57] node "custom-flannel-411536" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Sep 29 13:31:24 embed-certs-144376 crio[562]: time="2025-09-29 13:31:24.312696674Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4216b807-a978-458c-b4ee-5e1b9f3e25b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:32 embed-certs-144376 crio[562]: time="2025-09-29 13:31:32.312330264Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=aac69b5e-90a0-4282-812f-dea5fcbc74c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:32 embed-certs-144376 crio[562]: time="2025-09-29 13:31:32.312670639Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=aac69b5e-90a0-4282-812f-dea5fcbc74c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:37 embed-certs-144376 crio[562]: time="2025-09-29 13:31:37.312067295Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c0116f48-21c4-48ce-8d5d-3e039eba4da6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:37 embed-certs-144376 crio[562]: time="2025-09-29 13:31:37.312281115Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c0116f48-21c4-48ce-8d5d-3e039eba4da6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:44 embed-certs-144376 crio[562]: time="2025-09-29 13:31:44.312214764Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=376e4197-e7c1-43eb-be83-d2e1ce752fcd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:44 embed-certs-144376 crio[562]: time="2025-09-29 13:31:44.312552164Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=376e4197-e7c1-43eb-be83-d2e1ce752fcd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:50 embed-certs-144376 crio[562]: time="2025-09-29 13:31:50.313571484Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1922d91e-7858-4a6d-bf8e-fb5e642b5d11 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:50 embed-certs-144376 crio[562]: time="2025-09-29 13:31:50.313784001Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1922d91e-7858-4a6d-bf8e-fb5e642b5d11 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:56 embed-certs-144376 crio[562]: time="2025-09-29 13:31:56.312128206Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a30ec05c-9864-4513-bdda-ae4edbad0b66 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:56 embed-certs-144376 crio[562]: time="2025-09-29 13:31:56.312439272Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a30ec05c-9864-4513-bdda-ae4edbad0b66 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:01 embed-certs-144376 crio[562]: time="2025-09-29 13:32:01.312637783Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ba136626-16b7-4671-9ec2-d620f0cb61f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:01 embed-certs-144376 crio[562]: time="2025-09-29 13:32:01.312878340Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ba136626-16b7-4671-9ec2-d620f0cb61f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:10 embed-certs-144376 crio[562]: time="2025-09-29 13:32:10.312897968Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ba18ee0b-c835-4252-a74b-e2cf118c4f18 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:10 embed-certs-144376 crio[562]: time="2025-09-29 13:32:10.313318156Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ba18ee0b-c835-4252-a74b-e2cf118c4f18 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:14 embed-certs-144376 crio[562]: time="2025-09-29 13:32:14.312802067Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5b2bac58-9253-4494-94e6-bcd77f456b88 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:14 embed-certs-144376 crio[562]: time="2025-09-29 13:32:14.313149366Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5b2bac58-9253-4494-94e6-bcd77f456b88 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:21 embed-certs-144376 crio[562]: time="2025-09-29 13:32:21.312256946Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ef99ca8f-9cff-4e8f-9f9e-bfaccca82855 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:21 embed-certs-144376 crio[562]: time="2025-09-29 13:32:21.312522176Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ef99ca8f-9cff-4e8f-9f9e-bfaccca82855 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:29 embed-certs-144376 crio[562]: time="2025-09-29 13:32:29.312563312Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=15e62faf-3ddb-4423-9f44-5dba4d7fc900 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:29 embed-certs-144376 crio[562]: time="2025-09-29 13:32:29.312780877Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=15e62faf-3ddb-4423-9f44-5dba4d7fc900 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:36 embed-certs-144376 crio[562]: time="2025-09-29 13:32:36.311792011Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d86716c1-59f2-4465-8d0a-403dde7c9762 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:36 embed-certs-144376 crio[562]: time="2025-09-29 13:32:36.312146178Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d86716c1-59f2-4465-8d0a-403dde7c9762 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:43 embed-certs-144376 crio[562]: time="2025-09-29 13:32:43.312720238Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=db05e2eb-7312-4d9e-b698-cf350e6ef8dd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:43 embed-certs-144376 crio[562]: time="2025-09-29 13:32:43.313058570Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=db05e2eb-7312-4d9e-b698-cf350e6ef8dd name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bdcf7662e05c4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   71fa28180ae9f       dashboard-metrics-scraper-6ffb444bf9-swpg7
	20f828febad04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   cc76de805b765       storage-provisioner
	2c28c442a0836       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   9683036d15d13       busybox
	b81151f3f1788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   cc76de805b765       storage-provisioner
	6e8018e1ba402       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   5dbbe42bd9107       coredns-66bc5c9577-vrkvb
	ddf7c93195045       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   771de56399555       kindnet-cs6jd
	64084dd0f47ff       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   36ff22bd74db6       kube-proxy-bdkrl
	fc40bfd0b66a2       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   ff4b4c5fab795       kube-controller-manager-embed-certs-144376
	7292cb10a6712       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   49f9784a1f205       kube-scheduler-embed-certs-144376
	1598cd93517dd       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   dcee044811ca2       kube-apiserver-embed-certs-144376
	7d31b585aa936       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   a25d91e868943       etcd-embed-certs-144376
	
	
	==> coredns [6e8018e1ba402bbd1d336a9cd3a379b09dd4678592e47cdd2d79211c76d02da8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39425 - 47726 "HINFO IN 3466498718447411044.2783620433881952790. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10767793s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-144376
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-144376
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-144376
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-144376
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:32:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:28:10 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:28:10 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:28:10 +0000   Mon, 29 Sep 2025 13:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:28:10 +0000   Mon, 29 Sep 2025 13:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-144376
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7dea206b7bf44d46a0d219c98d3402a3
	  System UUID:                620c5672-8e57-43c3-9cff-b9f1422658b4
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-vrkvb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	  kube-system                 etcd-embed-certs-144376                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-cs6jd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-embed-certs-144376             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-144376    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-bdkrl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-144376             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-8wkwn               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-swpg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zmzj7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node embed-certs-144376 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-144376 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node embed-certs-144376 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-144376 event: Registered Node embed-certs-144376 in Controller
	  Normal  NodeReady                19m                kubelet          Node embed-certs-144376 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-144376 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-144376 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-144376 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-144376 event: Registered Node embed-certs-144376 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [7d31b585aa936e5b5f19f942cd8dd7597ad140998930c0f2f49c079b6d39d776] <==
	{"level":"warn","ts":"2025-09-29T13:14:02.388342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.396950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.405561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.413795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.422518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.431198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.440509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.449535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.457794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.466729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.474823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.485391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.492855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.500057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:02.554235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:24:01.937847Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":997}
	{"level":"info","ts":"2025-09-29T13:24:01.944663Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":997,"took":"6.491331ms","hash":3148437601,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3203072,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-29T13:24:01.944708Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3148437601,"revision":997,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:29:01.944063Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1276}
	{"level":"info","ts":"2025-09-29T13:29:01.947309Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1276,"took":"2.858859ms","hash":3727196298,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1826816,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T13:29:01.947355Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3727196298,"revision":1276,"compact-revision":997}
	{"level":"warn","ts":"2025-09-29T13:29:40.751710Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.306778ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596038406163336 > lease_revoke:<id:06ed99959bd1cb1d>","response":"size:28"}
	{"level":"info","ts":"2025-09-29T13:29:41.853874Z","caller":"traceutil/trace.go:172","msg":"trace[153498480] transaction","detail":"{read_only:false; response_revision:1568; number_of_response:1; }","duration":"109.775559ms","start":"2025-09-29T13:29:41.744067Z","end":"2025-09-29T13:29:41.853843Z","steps":["trace[153498480] 'process raft request'  (duration: 109.633244ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:30:30.242006Z","caller":"traceutil/trace.go:172","msg":"trace[56430879] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"116.317765ms","start":"2025-09-29T13:30:30.125667Z","end":"2025-09-29T13:30:30.241984Z","steps":["trace[56430879] 'process raft request'  (duration: 116.16221ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:30:30.749400Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.927875ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596038406163655 > lease_revoke:<id:06ed99959bd1cc5d>","response":"size:28"}
	
	
	==> kernel <==
	 13:32:44 up  3:15,  0 users,  load average: 1.82, 1.32, 1.35
	Linux embed-certs-144376 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ddf7c931950453b8415673fba84207479f2d7842e988e0588478d28906379b07] <==
	I0929 13:30:44.239078       1 main.go:301] handling current node
	I0929 13:30:54.231129       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:30:54.231186       1 main.go:301] handling current node
	I0929 13:31:04.235008       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:04.235044       1 main.go:301] handling current node
	I0929 13:31:14.232006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:14.232049       1 main.go:301] handling current node
	I0929 13:31:24.239986       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:24.240023       1 main.go:301] handling current node
	I0929 13:31:34.232967       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:34.233004       1 main.go:301] handling current node
	I0929 13:31:44.230979       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:44.231025       1 main.go:301] handling current node
	I0929 13:31:54.233679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:31:54.233723       1 main.go:301] handling current node
	I0929 13:32:04.232044       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:32:04.232249       1 main.go:301] handling current node
	I0929 13:32:14.238600       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:32:14.238651       1 main.go:301] handling current node
	I0929 13:32:24.239519       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:32:24.239555       1 main.go:301] handling current node
	I0929 13:32:34.232391       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:32:34.232443       1 main.go:301] handling current node
	I0929 13:32:44.238984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:32:44.239053       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1598cd93517dd22b6e988bd9bf309975c6618919d8b76695d9a395e2d0bbb04c] <==
	I0929 13:29:04.074330       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:29:31.023727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:30:04.074262       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:04.074326       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:30:04.074347       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:30:04.074474       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:04.074553       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:30:04.075383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:30:31.130107       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:30:51.761679       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:31:32.814974       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:32:04.074750       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:32:04.074816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:32:04.074832       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:32:04.076006       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:32:04.076160       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:32:04.076187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:32:11.502385       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:32:42.943474       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [fc40bfd0b66a2683e92b69459409e9f07839d9e5eface8f1106d2b80951c1b80] <==
	I0929 13:26:37.631563       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:07.544776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:07.638560       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:37.549070       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:37.645798       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:07.554017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:07.653494       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:37.559502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:37.663167       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:07.564616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:07.671578       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:37.570612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:37.680488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:07.575840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:07.690311       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:37.580920       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:37.700150       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:07.587172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:07.708680       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:37.592344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:37.715856       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:32:07.597482       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:32:07.724928       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:32:37.601930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:32:37.732261       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [64084dd0f47ff8074a122fb5e82e870a23b3dc3c07700e3bd18b887c37e590cd] <==
	I0929 13:14:03.884968       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:14:03.963039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:14:04.063458       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:14:04.063523       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 13:14:04.063656       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:14:04.089169       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:14:04.089240       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:14:04.095842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:14:04.096425       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:14:04.096465       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:04.098468       1 config.go:200] "Starting service config controller"
	I0929 13:14:04.098491       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:14:04.098518       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:14:04.098524       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:14:04.098539       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:14:04.098543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:14:04.099629       1 config.go:309] "Starting node config controller"
	I0929 13:14:04.099652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:14:04.099660       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:14:04.198690       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:14:04.198717       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:14:04.198722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7292cb10a67121f433d0bde2a2c955806dc4f4fd8f6d44d1b72039a3de28e08a] <==
	I0929 13:14:01.869165       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:14:03.011421       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:14:03.011458       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:14:03.011469       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:14:03.011480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:14:03.059117       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:14:03.059148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:03.061329       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:03.061381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:03.061778       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:14:03.061809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:14:03.162004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:31:55 embed-certs-144376 kubelet[710]: E0929 13:31:55.311923     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:31:56 embed-certs-144376 kubelet[710]: E0929 13:31:56.312752     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:32:00 embed-certs-144376 kubelet[710]: E0929 13:32:00.472743     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152720472462644  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:00 embed-certs-144376 kubelet[710]: E0929 13:32:00.472792     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152720472462644  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:01 embed-certs-144376 kubelet[710]: E0929 13:32:01.313299     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:32:06 embed-certs-144376 kubelet[710]: I0929 13:32:06.311492     710 scope.go:117] "RemoveContainer" containerID="bdcf7662e05c49a5d09e87535021e5be78f318cf6218521f137e0e197cfa6a97"
	Sep 29 13:32:06 embed-certs-144376 kubelet[710]: E0929 13:32:06.311801     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:32:10 embed-certs-144376 kubelet[710]: E0929 13:32:10.313628     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:32:10 embed-certs-144376 kubelet[710]: E0929 13:32:10.474495     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152730474181504  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:10 embed-certs-144376 kubelet[710]: E0929 13:32:10.474549     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152730474181504  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:14 embed-certs-144376 kubelet[710]: E0929 13:32:14.313486     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:32:20 embed-certs-144376 kubelet[710]: E0929 13:32:20.475768     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152740475545764  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:20 embed-certs-144376 kubelet[710]: E0929 13:32:20.475810     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152740475545764  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:21 embed-certs-144376 kubelet[710]: I0929 13:32:21.311650     710 scope.go:117] "RemoveContainer" containerID="bdcf7662e05c49a5d09e87535021e5be78f318cf6218521f137e0e197cfa6a97"
	Sep 29 13:32:21 embed-certs-144376 kubelet[710]: E0929 13:32:21.311915     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:32:21 embed-certs-144376 kubelet[710]: E0929 13:32:21.312941     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:32:29 embed-certs-144376 kubelet[710]: E0929 13:32:29.313149     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	Sep 29 13:32:30 embed-certs-144376 kubelet[710]: E0929 13:32:30.477488     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152750477224803  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:30 embed-certs-144376 kubelet[710]: E0929 13:32:30.477534     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152750477224803  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:36 embed-certs-144376 kubelet[710]: I0929 13:32:36.311300     710 scope.go:117] "RemoveContainer" containerID="bdcf7662e05c49a5d09e87535021e5be78f318cf6218521f137e0e197cfa6a97"
	Sep 29 13:32:36 embed-certs-144376 kubelet[710]: E0929 13:32:36.311546     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-swpg7_kubernetes-dashboard(1d8e6337-107a-4fb8-bb3c-99b372908964)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-swpg7" podUID="1d8e6337-107a-4fb8-bb3c-99b372908964"
	Sep 29 13:32:36 embed-certs-144376 kubelet[710]: E0929 13:32:36.312490     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zmzj7" podUID="3d7707ff-be06-433e-a8ea-a5478e606f81"
	Sep 29 13:32:40 embed-certs-144376 kubelet[710]: E0929 13:32:40.479606     710 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152760479299098  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:40 embed-certs-144376 kubelet[710]: E0929 13:32:40.479648     710 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152760479299098  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:43 embed-certs-144376 kubelet[710]: E0929 13:32:43.313516     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8wkwn" podUID="d0a89b58-3205-44cb-af7d-6e7a36bf99bf"
	
	
	==> storage-provisioner [20f828febad049e885af5b33e66f01607bc06a14adebea310f5c13dcae86ffa0] <==
	W0929 13:32:18.869155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:20.872841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:20.877441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:22.880917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:22.886720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:24.890368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:24.895013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:26.899287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:26.905269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:28.908444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:28.912811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:30.916518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:30.921088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:32.924445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:32.930268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:34.933551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:34.937938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:36.943475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:36.951435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:38.955371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:38.959928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:40.964142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:40.970156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:42.974915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:42.981815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b81151f3f178816a8153b88c2d79acae49eec4dda7952abb12ac6c961be4e6b7] <==
	I0929 13:14:03.875110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:14:33.879306       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-144376 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7: exit status 1 (66.985211ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8wkwn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zmzj7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-144376 describe pod metrics-server-746fcd58dc-8wkwn kubernetes-dashboard-855c9754f9-zmzj7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gmnqw" [a16fafc6-e94a-47ed-8838-4df0ecd6eb6c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:24:09.424549  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:15.385507  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:29:09.425203  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:32:51.582865066 +0000 UTC m=+4041.882901537
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe po kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-504443 describe po kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-gmnqw
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-504443/192.168.76.2
Start Time:       Mon, 29 Sep 2025 13:14:16 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fqmq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-7fqmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw to default-k8s-diff-port-504443
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m30s (x48 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m52s (x51 over 17m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard: exit status 1 (84.773265ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-gmnqw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-504443 logs kubernetes-dashboard-855c9754f9-gmnqw -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-504443
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-504443:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83",
	        "Created": "2025-09-29T13:12:58.237146464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 839701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:14:02.102317201Z",
	            "FinishedAt": "2025-09-29T13:14:01.114928788Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/hosts",
	        "LogPath": "/var/lib/docker/containers/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83/ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83-json.log",
	        "Name": "/default-k8s-diff-port-504443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-504443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-504443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ec073290678c0427573ed2f0c694a2c2e7c6a5b736d6407e80a4519960d7fa83",
	                "LowerDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7-init/diff:/var/lib/docker/overlay2/5cb83ec56c1be161928cc8bc4f279885a6a4b22967be0ce1007f0f003cec5a66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3fbe423389f64876f4e9333fa2b3b4a25c2b1f7bf1c6543afe9d95fcfc95a5a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-504443",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-504443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-504443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-504443",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-504443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f72f4f2a4951fd873e69965deb29d5776ecf83fad8d2032cc4a76e80e521b67",
	            "SandboxKey": "/var/run/docker/netns/8f72f4f2a495",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-504443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:be:76:8c:f8:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f5b4e4a14093b2a56f28b72dc27e49b82a8eb021b4f2e4b7640eb093e58224e4",
	                    "EndpointID": "5f2fef026ebd7b095b3ab2eed3068663a57fe40b044e5215cf3316724d92ba61",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-504443",
	                        "ec073290678c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504443 logs -n 25: (1.452112501s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-411536 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo docker system info                                                                                                                          │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cri-dockerd --version                                                                                                                       │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo containerd config dump                                                                                                                      │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ ssh     │ -p kindnet-411536 sudo crio config                                                                                                                                 │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ delete  │ -p kindnet-411536                                                                                                                                                  │ kindnet-411536            │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ start   │ -p custom-flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-411536     │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	│ image   │ embed-certs-144376 image list --format=json                                                                                                                        │ embed-certs-144376        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ pause   │ -p embed-certs-144376 --alsologtostderr -v=1                                                                                                                       │ embed-certs-144376        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ unpause │ -p embed-certs-144376 --alsologtostderr -v=1                                                                                                                       │ embed-certs-144376        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ delete  │ -p embed-certs-144376                                                                                                                                              │ embed-certs-144376        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ delete  │ -p embed-certs-144376                                                                                                                                              │ embed-certs-144376        │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │ 29 Sep 25 13:32 UTC │
	│ start   │ -p enable-default-cni-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio    │ enable-default-cni-411536 │ jenkins │ v1.37.0 │ 29 Sep 25 13:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:32:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:32:51.352457  887693 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:32:51.352735  887693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:32:51.352746  887693 out.go:374] Setting ErrFile to fd 2...
	I0929 13:32:51.352750  887693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:32:51.353017  887693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:32:51.353735  887693 out.go:368] Setting JSON to false
	I0929 13:32:51.355380  887693 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11716,"bootTime":1759141055,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:32:51.355489  887693 start.go:140] virtualization: kvm guest
	I0929 13:32:51.357688  887693 out.go:179] * [enable-default-cni-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:32:51.359299  887693 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:32:51.359302  887693 notify.go:220] Checking for updates...
	I0929 13:32:51.361930  887693 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:32:51.363378  887693 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:32:51.364841  887693 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:32:51.366222  887693 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:32:51.367527  887693 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:32:51.369603  887693 config.go:182] Loaded profile config "calico-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:51.369706  887693 config.go:182] Loaded profile config "custom-flannel-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:51.369783  887693 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:32:51.369931  887693 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:32:51.399662  887693 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:32:51.399839  887693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:32:51.461069  887693 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:32:51.448174052 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:32:51.461178  887693 docker.go:318] overlay module found
	I0929 13:32:51.463475  887693 out.go:179] * Using the docker driver based on user configuration
	I0929 13:32:51.465016  887693 start.go:304] selected driver: docker
	I0929 13:32:51.465039  887693 start.go:924] validating driver "docker" against <nil>
	I0929 13:32:51.465058  887693 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:32:51.465781  887693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:32:51.536944  887693 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:32:51.525030364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:32:51.537184  887693 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E0929 13:32:51.537422  887693 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0929 13:32:51.537461  887693 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:32:51.539561  887693 out.go:179] * Using Docker driver with root privileges
	I0929 13:32:51.541110  887693 cni.go:84] Creating CNI manager for "bridge"
	I0929 13:32:51.541135  887693 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 13:32:51.541248  887693 start.go:348] cluster config:
	{Name:enable-default-cni-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:32:51.542996  887693 out.go:179] * Starting "enable-default-cni-411536" primary control-plane node in "enable-default-cni-411536" cluster
	I0929 13:32:51.544411  887693 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:32:51.546001  887693 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:32:51.547452  887693 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:32:51.547518  887693 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:32:51.547544  887693 cache.go:58] Caching tarball of preloaded images
	I0929 13:32:51.547576  887693 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:32:51.547669  887693 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:32:51.547687  887693 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:32:51.547840  887693 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/enable-default-cni-411536/config.json ...
	I0929 13:32:51.547896  887693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/enable-default-cni-411536/config.json: {Name:mkedfddb156f7f13193e9bcf793f19182d68be8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:32:51.573246  887693 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:32:51.573270  887693 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:32:51.573291  887693 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:32:51.573322  887693 start.go:360] acquireMachinesLock for enable-default-cni-411536: {Name:mk823b92a094c7a2a06db2e5d02d6445990aa90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:32:51.573437  887693 start.go:364] duration metric: took 94.033µs to acquireMachinesLock for "enable-default-cni-411536"
	I0929 13:32:51.573468  887693 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:32:51.573569  887693 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:32:46.897007  882031 system_pods.go:86] 7 kube-system pods found
	I0929 13:32:46.897046  882031 system_pods.go:89] "coredns-66bc5c9577-vpdbw" [24da5098-2c2f-4478-859a-aece601a00b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:32:46.897053  882031 system_pods.go:89] "etcd-custom-flannel-411536" [e88b76c3-38cd-40d7-9b34-2105727276f0] Running
	I0929 13:32:46.897060  882031 system_pods.go:89] "kube-apiserver-custom-flannel-411536" [bf7fd20b-eacc-4f8c-b33b-f9c491bcd0ae] Running
	I0929 13:32:46.897064  882031 system_pods.go:89] "kube-controller-manager-custom-flannel-411536" [f5f38f1a-c558-4f3e-aa01-a5d20ea7ac53] Running
	I0929 13:32:46.897068  882031 system_pods.go:89] "kube-proxy-jfwwb" [d7a25553-72c3-4f9c-a08c-7a1c6fe371fe] Running
	I0929 13:32:46.897073  882031 system_pods.go:89] "kube-scheduler-custom-flannel-411536" [30f04cae-c60a-4b9a-a6d5-92d595a2c9fc] Running
	I0929 13:32:46.897078  882031 system_pods.go:89] "storage-provisioner" [b4321651-3232-4cb1-8870-9296a279e895] Running
	I0929 13:32:46.897095  882031 retry.go:31] will retry after 916.898332ms: missing components: kube-dns
	I0929 13:32:47.820312  882031 system_pods.go:86] 7 kube-system pods found
	I0929 13:32:47.820360  882031 system_pods.go:89] "coredns-66bc5c9577-vpdbw" [24da5098-2c2f-4478-859a-aece601a00b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:32:47.820371  882031 system_pods.go:89] "etcd-custom-flannel-411536" [e88b76c3-38cd-40d7-9b34-2105727276f0] Running
	I0929 13:32:47.820383  882031 system_pods.go:89] "kube-apiserver-custom-flannel-411536" [bf7fd20b-eacc-4f8c-b33b-f9c491bcd0ae] Running
	I0929 13:32:47.820391  882031 system_pods.go:89] "kube-controller-manager-custom-flannel-411536" [f5f38f1a-c558-4f3e-aa01-a5d20ea7ac53] Running
	I0929 13:32:47.820397  882031 system_pods.go:89] "kube-proxy-jfwwb" [d7a25553-72c3-4f9c-a08c-7a1c6fe371fe] Running
	I0929 13:32:47.820404  882031 system_pods.go:89] "kube-scheduler-custom-flannel-411536" [30f04cae-c60a-4b9a-a6d5-92d595a2c9fc] Running
	I0929 13:32:47.820409  882031 system_pods.go:89] "storage-provisioner" [b4321651-3232-4cb1-8870-9296a279e895] Running
	I0929 13:32:47.820431  882031 retry.go:31] will retry after 952.504426ms: missing components: kube-dns
	I0929 13:32:48.777784  882031 system_pods.go:86] 7 kube-system pods found
	I0929 13:32:48.777829  882031 system_pods.go:89] "coredns-66bc5c9577-vpdbw" [24da5098-2c2f-4478-859a-aece601a00b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:32:48.777838  882031 system_pods.go:89] "etcd-custom-flannel-411536" [e88b76c3-38cd-40d7-9b34-2105727276f0] Running
	I0929 13:32:48.777848  882031 system_pods.go:89] "kube-apiserver-custom-flannel-411536" [bf7fd20b-eacc-4f8c-b33b-f9c491bcd0ae] Running
	I0929 13:32:48.777855  882031 system_pods.go:89] "kube-controller-manager-custom-flannel-411536" [f5f38f1a-c558-4f3e-aa01-a5d20ea7ac53] Running
	I0929 13:32:48.777874  882031 system_pods.go:89] "kube-proxy-jfwwb" [d7a25553-72c3-4f9c-a08c-7a1c6fe371fe] Running
	I0929 13:32:48.777895  882031 system_pods.go:89] "kube-scheduler-custom-flannel-411536" [30f04cae-c60a-4b9a-a6d5-92d595a2c9fc] Running
	I0929 13:32:48.777900  882031 system_pods.go:89] "storage-provisioner" [b4321651-3232-4cb1-8870-9296a279e895] Running
	I0929 13:32:48.777922  882031 retry.go:31] will retry after 1.087308243s: missing components: kube-dns
	I0929 13:32:49.869717  882031 system_pods.go:86] 7 kube-system pods found
	I0929 13:32:49.869750  882031 system_pods.go:89] "coredns-66bc5c9577-vpdbw" [24da5098-2c2f-4478-859a-aece601a00b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:32:49.869756  882031 system_pods.go:89] "etcd-custom-flannel-411536" [e88b76c3-38cd-40d7-9b34-2105727276f0] Running
	I0929 13:32:49.869763  882031 system_pods.go:89] "kube-apiserver-custom-flannel-411536" [bf7fd20b-eacc-4f8c-b33b-f9c491bcd0ae] Running
	I0929 13:32:49.869767  882031 system_pods.go:89] "kube-controller-manager-custom-flannel-411536" [f5f38f1a-c558-4f3e-aa01-a5d20ea7ac53] Running
	I0929 13:32:49.869770  882031 system_pods.go:89] "kube-proxy-jfwwb" [d7a25553-72c3-4f9c-a08c-7a1c6fe371fe] Running
	I0929 13:32:49.869773  882031 system_pods.go:89] "kube-scheduler-custom-flannel-411536" [30f04cae-c60a-4b9a-a6d5-92d595a2c9fc] Running
	I0929 13:32:49.869776  882031 system_pods.go:89] "storage-provisioner" [b4321651-3232-4cb1-8870-9296a279e895] Running
	I0929 13:32:49.869794  882031 retry.go:31] will retry after 1.60210362s: missing components: kube-dns
	I0929 13:32:51.477809  882031 system_pods.go:86] 7 kube-system pods found
	I0929 13:32:51.477852  882031 system_pods.go:89] "coredns-66bc5c9577-vpdbw" [24da5098-2c2f-4478-859a-aece601a00b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:32:51.477860  882031 system_pods.go:89] "etcd-custom-flannel-411536" [e88b76c3-38cd-40d7-9b34-2105727276f0] Running
	I0929 13:32:51.477867  882031 system_pods.go:89] "kube-apiserver-custom-flannel-411536" [bf7fd20b-eacc-4f8c-b33b-f9c491bcd0ae] Running
	I0929 13:32:51.477871  882031 system_pods.go:89] "kube-controller-manager-custom-flannel-411536" [f5f38f1a-c558-4f3e-aa01-a5d20ea7ac53] Running
	I0929 13:32:51.477875  882031 system_pods.go:89] "kube-proxy-jfwwb" [d7a25553-72c3-4f9c-a08c-7a1c6fe371fe] Running
	I0929 13:32:51.477878  882031 system_pods.go:89] "kube-scheduler-custom-flannel-411536" [30f04cae-c60a-4b9a-a6d5-92d595a2c9fc] Running
	I0929 13:32:51.477905  882031 system_pods.go:89] "storage-provisioner" [b4321651-3232-4cb1-8870-9296a279e895] Running
	I0929 13:32:51.477932  882031 retry.go:31] will retry after 1.457807189s: missing components: kube-dns
	
	
	==> CRI-O <==
	Sep 29 13:31:28 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:28.908077656Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=936e6ac3-3372-4141-8eea-3335ca0d1fcb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:34 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:34.909116080Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=df5c303f-5671-4008-8776-a74af9825c22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:34 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:34.909423483Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=df5c303f-5671-4008-8776-a74af9825c22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:41.907803487Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=23733d2a-33cf-49cb-94fc-750d5fd214a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:41 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:41.908114294Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=23733d2a-33cf-49cb-94fc-750d5fd214a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:48 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:48.908760459Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b6010f8e-9879-4c01-994d-16ee94485bb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:48 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:48.909128125Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b6010f8e-9879-4c01-994d-16ee94485bb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:53 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:53.907938473Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=48cca2bb-4bd3-400c-bc09-a7b5209a096a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:31:53 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:31:53.908208435Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=48cca2bb-4bd3-400c-bc09-a7b5209a096a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:03 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:03.907458698Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=936bf698-1bd8-47f8-93b6-b22062afc68e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:03 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:03.907762401Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=936bf698-1bd8-47f8-93b6-b22062afc68e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:04 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:04.907575555Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b23d1e15-2417-4fa0-87c9-64d8dc371e90 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:04 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:04.907934237Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b23d1e15-2417-4fa0-87c9-64d8dc371e90 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:14 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:14.908288865Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=850f63b7-7f0d-47c6-a1a5-b9debb607361 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:14 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:14.908579578Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=850f63b7-7f0d-47c6-a1a5-b9debb607361 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:18 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:18.908072561Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0a929dab-34aa-4e77-8cfd-dd77af8ef0b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:18 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:18.908427256Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0a929dab-34aa-4e77-8cfd-dd77af8ef0b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:29 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:29.907145980Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e5001e52-77af-4694-9f72-b6c261b330de name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:29 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:29.907497860Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e5001e52-77af-4694-9f72-b6c261b330de name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:32 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:32.907590371Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b2c413d5-052f-4736-95ec-159cb330cbcd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:32 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:32.908102618Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b2c413d5-052f-4736-95ec-159cb330cbcd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:43 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:43.908127105Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5b0f9016-7eec-4023-b0db-3330dcf59b33 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:43 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:43.908374517Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5b0f9016-7eec-4023-b0db-3330dcf59b33 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:46 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:46.907464598Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a02c119e-de97-42be-9568-95e1dbad12c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 13:32:46 default-k8s-diff-port-504443 crio[561]: time="2025-09-29 13:32:46.907801228Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a02c119e-de97-42be-9568-95e1dbad12c6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e3049fbe59a75       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   7a5a8a3f04b80       dashboard-metrics-scraper-6ffb444bf9-47kpl
	e932e508fe0aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   b878a4c0eee8e       storage-provisioner
	f4f260ee133fa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   4d065629e4bd1       coredns-66bc5c9577-prpff
	9a94851ef231f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   76a90f384388d       kindnet-fb5jq
	f50ff8e61753e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   62f8489fca403       busybox
	73711de9fb93e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   f5834108d3965       kube-proxy-vcsfr
	c9099c6e53076       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   b878a4c0eee8e       storage-provisioner
	45aac201e654c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   2d9df557a8345       kube-apiserver-default-k8s-diff-port-504443
	869cb9c9ee595       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   8c2e3b881d82c       etcd-default-k8s-diff-port-504443
	38c52fbbfcf31       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   1ab57bf894ea6       kube-controller-manager-default-k8s-diff-port-504443
	11ae39a5a4b2a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   223b1fd348502       kube-scheduler-default-k8s-diff-port-504443
	
	
	==> coredns [f4f260ee133fa2a71e1bed3ffaa90ed10104a38b223337e4dabea66e6e6a15da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52679 - 49434 "HINFO IN 1943250935440787998.4878101473045877455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.16263863s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-504443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-504443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_13_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:13:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:32:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:30:51 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:30:51 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:30:51 +0000   Mon, 29 Sep 2025 13:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:30:51 +0000   Mon, 29 Sep 2025 13:13:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-504443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4dd16f3f1464516aeff8dc64d8f97e7
	  System UUID:                9ce7ec70-e159-4f57-aefc-7e470dc6dd77
	  Boot ID:                    fabba884-bc1a-473f-b978-af61a6e1dfba
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-prpff                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-504443                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-fb5jq                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504443             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504443    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-vcsfr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504443             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-l5t2q                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-47kpl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gmnqw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-504443 event: Registered Node default-k8s-diff-port-504443 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-504443 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-504443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-504443 event: Registered Node default-k8s-diff-port-504443 in Controller
	
	
	==> dmesg <==
	[Sep29 12:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.021401] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023935] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +2.047781] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +4.031718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[  +8.383317] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[ +16.383392] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	[Sep29 12:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 0e b0 24 f1 4b 16 9b a7 cb 49 1c 08 00
	
	
	==> etcd [869cb9c9ee5959b76e080f7c95693a4d8a3d124e77e6b95e8b1de7a394883932] <==
	{"level":"warn","ts":"2025-09-29T13:14:11.408850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.418777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.427562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.436747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.444971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.454080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.462610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.470874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.479347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.487265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.496167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.505577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.514288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.526467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.535278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.545252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:14:11.602552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:24:11.012111Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1024}
	{"level":"info","ts":"2025-09-29T13:24:11.034505Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1024,"took":"22.008984ms","hash":154287375,"current-db-size-bytes":3190784,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1323008,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T13:24:11.034598Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":154287375,"revision":1024,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:29:11.017147Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1307}
	{"level":"info","ts":"2025-09-29T13:29:11.020473Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1307,"took":"2.987444ms","hash":2350861093,"current-db-size-bytes":3190784,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1896448,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T13:29:11.020605Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2350861093,"revision":1307,"compact-revision":1024}
	{"level":"info","ts":"2025-09-29T13:29:41.469647Z","caller":"traceutil/trace.go:172","msg":"trace[780412587] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"156.037573ms","start":"2025-09-29T13:29:41.313582Z","end":"2025-09-29T13:29:41.469619Z","steps":["trace[780412587] 'process raft request'  (duration: 155.904135ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:30:31.573234Z","caller":"traceutil/trace.go:172","msg":"trace[1407021724] transaction","detail":"{read_only:false; response_revision:1636; number_of_response:1; }","duration":"120.598444ms","start":"2025-09-29T13:30:31.452614Z","end":"2025-09-29T13:30:31.573213Z","steps":["trace[1407021724] 'process raft request'  (duration: 120.456495ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:32:53 up  3:15,  0 users,  load average: 1.85, 1.35, 1.36
	Linux default-k8s-diff-port-504443 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9a94851ef231f0cc0fe0d8707d2239b0aeb90d0223808bf4cd37f09acd0a7412] <==
	I0929 13:30:43.788756       1 main.go:301] handling current node
	I0929 13:30:53.784038       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:30:53.784078       1 main.go:301] handling current node
	I0929 13:31:03.793054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:03.793103       1 main.go:301] handling current node
	I0929 13:31:13.788101       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:13.788142       1 main.go:301] handling current node
	I0929 13:31:23.785007       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:23.785046       1 main.go:301] handling current node
	I0929 13:31:33.789984       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:33.790015       1 main.go:301] handling current node
	I0929 13:31:43.787966       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:43.788019       1 main.go:301] handling current node
	I0929 13:31:53.789184       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:53.789234       1 main.go:301] handling current node
	I0929 13:32:03.792026       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:03.792070       1 main.go:301] handling current node
	I0929 13:32:13.788099       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:13.788131       1 main.go:301] handling current node
	I0929 13:32:23.789596       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:23.789636       1 main.go:301] handling current node
	I0929 13:32:33.790179       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:33.790211       1 main.go:301] handling current node
	I0929 13:32:43.787024       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:43.787069       1 main.go:301] handling current node
	
	
	==> kube-apiserver [45aac201e654c63a49fceb57713f628b773c234f55e702e4a52d6f4f144e56f3] <==
	I0929 13:29:13.088075       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:29:39.776995       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:29:41.802455       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:30:13.087590       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:13.087647       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:30:13.087663       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:30:13.088717       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:13.088789       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:30:13.088802       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:31:06.020392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:31:07.531709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:32:13.088840       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:32:13.088914       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:32:13.088934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:32:13.088952       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:32:13.089059       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:32:13.090956       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:32:24.460903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:32:28.981445       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [38c52fbbfcf3188086b7e7244f30aa5b16d04ee26967b32c2df673b9908a9ff6] <==
	I0929 13:26:46.696826       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:16.598348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:16.703832       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:46.603273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:46.711752       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:16.608803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:16.718783       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:46.613739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:46.726412       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:16.619126       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:16.738460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:46.624000       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:46.752911       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:16.630709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:16.769893       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:46.641125       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:46.777911       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:16.646158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:16.786490       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:46.651177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:46.793526       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:32:16.656324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:32:16.801328       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:32:46.663897       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:32:46.808938       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [73711de9fb93eec8c4588fd6c3c3d3bc4494b223a56e01759b33f0558db5c7bf] <==
	I0929 13:14:13.442419       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:14:13.509294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:14:13.609833       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:14:13.609907       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:14:13.610059       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:14:13.630147       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:14:13.630210       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:14:13.635749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:14:13.636263       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:14:13.636309       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:13.637605       1 config.go:309] "Starting node config controller"
	I0929 13:14:13.637628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:14:13.637724       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:14:13.637744       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:14:13.637838       1 config.go:200] "Starting service config controller"
	I0929 13:14:13.637857       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:14:13.637861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:14:13.637866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:14:13.738534       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:14:13.738547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:14:13.738579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:14:13.738666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [11ae39a5a4b2aa54de1a58fcc1500a804983a7f75c9d9041bfac4248aebd4626] <==
	I0929 13:14:10.414585       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:14:12.057382       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:14:12.057542       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:14:12.057559       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:14:12.057569       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:14:12.088946       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:14:12.089003       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:14:12.095383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:12.095425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:14:12.107335       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:14:12.107406       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:14:12.196246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:32:04 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:04.908355     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:32:09 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:09.060294     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152729059985475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:09 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:09.060340     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152729059985475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:12 default-k8s-diff-port-504443 kubelet[708]: I0929 13:32:12.907292     708 scope.go:117] "RemoveContainer" containerID="e3049fbe59a755ad06065d5b9c0581b448f3e0b09ce829580cf80c665b2eb48d"
	Sep 29 13:32:12 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:12.908093     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:32:14 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:14.909201     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:32:18 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:18.908736     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:32:19 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:19.062029     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152739061633372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:19 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:19.062069     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152739061633372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:23 default-k8s-diff-port-504443 kubelet[708]: I0929 13:32:23.907037     708 scope.go:117] "RemoveContainer" containerID="e3049fbe59a755ad06065d5b9c0581b448f3e0b09ce829580cf80c665b2eb48d"
	Sep 29 13:32:23 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:23.907343     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:32:29 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:29.063764     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152749063528876  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:29 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:29.063798     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152749063528876  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:29 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:29.907941     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:32:32 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:32.908481     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:32:37 default-k8s-diff-port-504443 kubelet[708]: I0929 13:32:37.907432     708 scope.go:117] "RemoveContainer" containerID="e3049fbe59a755ad06065d5b9c0581b448f3e0b09ce829580cf80c665b2eb48d"
	Sep 29 13:32:37 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:37.907632     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	Sep 29 13:32:39 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:39.065383     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152759065116754  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:39 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:39.065424     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152759065116754  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:43 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:43.908744     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-l5t2q" podUID="618425bc-036b-42f0-9fdf-4e7744bdd84d"
	Sep 29 13:32:46 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:46.908182     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gmnqw" podUID="a16fafc6-e94a-47ed-8838-4df0ecd6eb6c"
	Sep 29 13:32:49 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:49.066825     708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759152769066534833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:49 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:49.066871     708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759152769066534833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 13:32:52 default-k8s-diff-port-504443 kubelet[708]: I0929 13:32:52.907802     708 scope.go:117] "RemoveContainer" containerID="e3049fbe59a755ad06065d5b9c0581b448f3e0b09ce829580cf80c665b2eb48d"
	Sep 29 13:32:52 default-k8s-diff-port-504443 kubelet[708]: E0929 13:32:52.908063     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-47kpl_kubernetes-dashboard(7b6c5970-c1ec-4987-9efd-33ffbc8b08dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-47kpl" podUID="7b6c5970-c1ec-4987-9efd-33ffbc8b08dd"
	
	
	==> storage-provisioner [c9099c6e5307691f3116db853b92b66c3949faab2309ad5b82cb0af51459bb7a] <==
	I0929 13:14:13.373654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:14:43.376401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e932e508fe0aade1ac939aa0cbd00a696fb0e4e4be0f66e113009c58e45036c4] <==
	W0929 13:32:28.518860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:30.523395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:30.528247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:32.531484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:32.539352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:34.542469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:34.546939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:36.550836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:36.556680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:38.560373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:38.564535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:40.568401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:40.572820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:42.576836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:42.583860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:44.587581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:44.592858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:46.596743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:46.601649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:48.605572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:48.610244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:50.614039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:50.627803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:52.631744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:52.637705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw: exit status 1 (71.051779ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-l5t2q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gmnqw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-504443 describe pod metrics-server-746fcd58dc-l5t2q kubernetes-dashboard-855c9754f9-gmnqw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.97s)
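
A brief diagnostic sketch (not part of the recorded test run): the kubelet log above shows both non-running pods stuck on image pulls, kubernetes-dashboard against Docker Hub's unauthenticated rate limit ("toomanyrequests") and metrics-server against the intentionally unresolvable fake.domain registry. Assuming the profile were still up, the same symptoms could be inspected directly; the context and namespace names below are taken from the log, and the commands themselves are illustrative rather than anything the harness executes:

	# illustrative only: list the failing pods and the image-pull events recorded for them
	kubectl --context default-k8s-diff-port-504443 -n kubernetes-dashboard get pods -o wide
	kubectl --context default-k8s-diff-port-504443 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
	kubectl --context default-k8s-diff-port-504443 -n kube-system get events --sort-by=.lastTimestamp | grep -iE 'pull|image'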

                                                
                                    
TestNetworkPlugins/group/calico/Start (925.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0929 13:31:09.264637  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:31:28.563497  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m25.763017895s)

                                                
                                                
-- stdout --
	* [calico-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-411536" primary control-plane node in "calico-411536" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:31:04.819265  874044 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:31:04.819527  874044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:31:04.819537  874044 out.go:374] Setting ErrFile to fd 2...
	I0929 13:31:04.819541  874044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:31:04.819759  874044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:31:04.820305  874044 out.go:368] Setting JSON to false
	I0929 13:31:04.821644  874044 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11610,"bootTime":1759141055,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:31:04.821772  874044 start.go:140] virtualization: kvm guest
	I0929 13:31:04.824427  874044 out.go:179] * [calico-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:31:04.826218  874044 notify.go:220] Checking for updates...
	I0929 13:31:04.826238  874044 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:31:04.827968  874044 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:31:04.829450  874044 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:31:04.831168  874044 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:31:04.833084  874044 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:31:04.834751  874044 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:31:04.836774  874044 config.go:182] Loaded profile config "default-k8s-diff-port-504443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:31:04.836873  874044 config.go:182] Loaded profile config "embed-certs-144376": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:31:04.836970  874044 config.go:182] Loaded profile config "kindnet-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:31:04.837093  874044 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:31:04.865416  874044 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:31:04.865507  874044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:31:04.929617  874044 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:31:04.916813487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:31:04.929737  874044 docker.go:318] overlay module found
	I0929 13:31:04.932119  874044 out.go:179] * Using the docker driver based on user configuration
	I0929 13:31:04.933665  874044 start.go:304] selected driver: docker
	I0929 13:31:04.933687  874044 start.go:924] validating driver "docker" against <nil>
	I0929 13:31:04.933703  874044 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:31:04.934536  874044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:31:04.995144  874044 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:31:04.984239334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:31:04.995349  874044 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:31:04.995634  874044 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:31:04.997876  874044 out.go:179] * Using Docker driver with root privileges
	I0929 13:31:04.999613  874044 cni.go:84] Creating CNI manager for "calico"
	I0929 13:31:04.999647  874044 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 13:31:04.999776  874044 start.go:348] cluster config:
	{Name:calico-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterva
l:1m0s}
	I0929 13:31:05.002118  874044 out.go:179] * Starting "calico-411536" primary control-plane node in "calico-411536" cluster
	I0929 13:31:05.003851  874044 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 13:31:05.005542  874044 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:31:05.007105  874044 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:31:05.007169  874044 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 13:31:05.007168  874044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:31:05.007182  874044 cache.go:58] Caching tarball of preloaded images
	I0929 13:31:05.007292  874044 preload.go:172] Found /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 13:31:05.007304  874044 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 13:31:05.007438  874044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/config.json ...
	I0929 13:31:05.007463  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/config.json: {Name:mk94521e969185213641aff45d3f07c1e3232d99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:05.031694  874044 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:31:05.031717  874044 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:31:05.031744  874044 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:31:05.031771  874044 start.go:360] acquireMachinesLock for calico-411536: {Name:mkd7a7f2d915d8f4a83cbee35d8e7e73204aead6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:31:05.031914  874044 start.go:364] duration metric: took 117.998µs to acquireMachinesLock for "calico-411536"
	I0929 13:31:05.031951  874044 start.go:93] Provisioning new machine with config: &{Name:calico-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:31:05.032029  874044 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:31:05.034128  874044 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:31:05.034362  874044 start.go:159] libmachine.API.Create for "calico-411536" (driver="docker")
	I0929 13:31:05.034396  874044 client.go:168] LocalClient.Create starting
	I0929 13:31:05.034488  874044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem
	I0929 13:31:05.034519  874044 main.go:141] libmachine: Decoding PEM data...
	I0929 13:31:05.034533  874044 main.go:141] libmachine: Parsing certificate...
	I0929 13:31:05.034599  874044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem
	I0929 13:31:05.034639  874044 main.go:141] libmachine: Decoding PEM data...
	I0929 13:31:05.034657  874044 main.go:141] libmachine: Parsing certificate...
	I0929 13:31:05.035080  874044 cli_runner.go:164] Run: docker network inspect calico-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:31:05.053730  874044 cli_runner.go:211] docker network inspect calico-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:31:05.053822  874044 network_create.go:284] running [docker network inspect calico-411536] to gather additional debugging logs...
	I0929 13:31:05.053844  874044 cli_runner.go:164] Run: docker network inspect calico-411536
	W0929 13:31:05.073922  874044 cli_runner.go:211] docker network inspect calico-411536 returned with exit code 1
	I0929 13:31:05.073975  874044 network_create.go:287] error running [docker network inspect calico-411536]: docker network inspect calico-411536: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-411536 not found
	I0929 13:31:05.074012  874044 network_create.go:289] output of [docker network inspect calico-411536]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-411536 not found
	
	** /stderr **
	I0929 13:31:05.074272  874044 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:31:05.095101  874044 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-658937e2822f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:db:59:32:33:14} reservation:<nil>}
	I0929 13:31:05.096098  874044 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0aedf79fab3f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:00:40:22:c0:9d} reservation:<nil>}
	I0929 13:31:05.097144  874044 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4e6b729de02 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:90:ed:5e:c1:cf} reservation:<nil>}
	I0929 13:31:05.097909  874044 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f5b4e4a14093 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:71:86:c8:61:29} reservation:<nil>}
	I0929 13:31:05.099133  874044 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6a07eab15133 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:ea:a5:28:87:6b} reservation:<nil>}
	I0929 13:31:05.100466  874044 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8d12f94bd350 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:06:5c:a2:da:1c:42} reservation:<nil>}
	I0929 13:31:05.101329  874044 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c958a0}
	I0929 13:31:05.101357  874044 network_create.go:124] attempt to create docker network calico-411536 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0929 13:31:05.101421  874044 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-411536 calico-411536
	I0929 13:31:05.170061  874044 network_create.go:108] docker network calico-411536 192.168.103.0/24 created
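
The six "skipping subnet" lines above are minikube probing candidate 192.168.x.0/24 ranges (stepping the third octet) until it finds one that no existing Docker bridge network owns, then creating calico-411536 on the first free one. A rough shell equivalent of that probe, reusing the same --format template the log runs (illustrative only, not the code minikube executes):

    # Walk the same candidate subnets the log shows and stop at the first free one.
    for third in 49 58 67 76 85 94 103; do
      subnet="192.168.${third}.0/24"
      if docker network ls -q | xargs docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep -qx "$subnet"; then
        echo "skipping taken subnet $subnet"
      else
        echo "using free private subnet $subnet"   # 192.168.103.0/24 in this run
        break
      fi
    done
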
	I0929 13:31:05.170109  874044 kic.go:121] calculated static IP "192.168.103.2" for the "calico-411536" container
	I0929 13:31:05.170254  874044 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:31:05.190142  874044 cli_runner.go:164] Run: docker volume create calico-411536 --label name.minikube.sigs.k8s.io=calico-411536 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:31:05.211767  874044 oci.go:103] Successfully created a docker volume calico-411536
	I0929 13:31:05.211865  874044 cli_runner.go:164] Run: docker run --rm --name calico-411536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-411536 --entrypoint /usr/bin/test -v calico-411536:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:31:05.646482  874044 oci.go:107] Successfully prepared a docker volume calico-411536
	I0929 13:31:05.646545  874044 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:31:05.646570  874044 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:31:05.646642  874044 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-411536:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:31:10.126246  874044 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-411536:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.479549394s)
	I0929 13:31:10.126279  874044 kic.go:203] duration metric: took 4.47970677s to extract preloaded images to volume ...
	W0929 13:31:10.126376  874044 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 13:31:10.126405  874044 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 13:31:10.126457  874044 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:31:10.185624  874044 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-411536 --name calico-411536 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-411536 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-411536 --network calico-411536 --ip 192.168.103.2 --volume calico-411536:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:31:10.494960  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Running}}
	I0929 13:31:10.516136  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:10.537282  874044 cli_runner.go:164] Run: docker exec calico-411536 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:31:10.593041  874044 oci.go:144] the created container "calico-411536" has a running status.
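
The docker run above publishes the node container's key ports (22 for SSH, 8443 for the Kubernetes API server, plus 2376, 5000 and 32443) to ephemeral ports on 127.0.0.1; the later "docker container inspect -f ... 22/tcp" lines read the SSH mapping back (33488 in this run). The same mappings can be queried directly with docker port (illustrative):

    docker port calico-411536 22     # host port used for SSH into the node
    docker port calico-411536 8443   # host port fronting the API server
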
	I0929 13:31:10.593082  874044 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa...
	I0929 13:31:10.810398  874044 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:31:10.847141  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:10.869641  874044 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:31:10.869668  874044 kic_runner.go:114] Args: [docker exec --privileged calico-411536 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:31:10.920695  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:10.940813  874044 machine.go:93] provisionDockerMachine start ...
	I0929 13:31:10.940987  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:10.963857  874044 main.go:141] libmachine: Using SSH client type: native
	I0929 13:31:10.964266  874044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0929 13:31:10.964288  874044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:31:11.109165  874044 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-411536
	
	I0929 13:31:11.109199  874044 ubuntu.go:182] provisioning hostname "calico-411536"
	I0929 13:31:11.109283  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:11.129487  874044 main.go:141] libmachine: Using SSH client type: native
	I0929 13:31:11.129756  874044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0929 13:31:11.129778  874044 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-411536 && echo "calico-411536" | sudo tee /etc/hostname
	I0929 13:31:11.288403  874044 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-411536
	
	I0929 13:31:11.288540  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:11.309605  874044 main.go:141] libmachine: Using SSH client type: native
	I0929 13:31:11.309951  874044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0929 13:31:11.309983  874044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-411536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-411536/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-411536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:31:11.452434  874044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:31:11.452469  874044 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-564029/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-564029/.minikube}
	I0929 13:31:11.452519  874044 ubuntu.go:190] setting up certificates
	I0929 13:31:11.452541  874044 provision.go:84] configureAuth start
	I0929 13:31:11.452619  874044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-411536
	I0929 13:31:11.473942  874044 provision.go:143] copyHostCerts
	I0929 13:31:11.474019  874044 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem, removing ...
	I0929 13:31:11.474043  874044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem
	I0929 13:31:11.474132  874044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/ca.pem (1082 bytes)
	I0929 13:31:11.474439  874044 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem, removing ...
	I0929 13:31:11.474462  874044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem
	I0929 13:31:11.474522  874044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/cert.pem (1123 bytes)
	I0929 13:31:11.474611  874044 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem, removing ...
	I0929 13:31:11.474622  874044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem
	I0929 13:31:11.474652  874044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-564029/.minikube/key.pem (1675 bytes)
	I0929 13:31:11.474704  874044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem org=jenkins.calico-411536 san=[127.0.0.1 192.168.103.2 calico-411536 localhost minikube]
	I0929 13:31:11.943556  874044 provision.go:177] copyRemoteCerts
	I0929 13:31:11.943629  874044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:31:11.943678  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:11.964379  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:12.067231  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 13:31:12.100948  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:31:12.132571  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:31:12.161738  874044 provision.go:87] duration metric: took 709.177286ms to configureAuth
	I0929 13:31:12.161779  874044 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:31:12.161997  874044 config.go:182] Loaded profile config "calico-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:31:12.162134  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:12.182229  874044 main.go:141] libmachine: Using SSH client type: native
	I0929 13:31:12.182442  874044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0929 13:31:12.182458  874044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 13:31:12.445914  874044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 13:31:12.445946  874044 machine.go:96] duration metric: took 1.505101116s to provisionDockerMachine
	I0929 13:31:12.445960  874044 client.go:171] duration metric: took 7.411557666s to LocalClient.Create
	I0929 13:31:12.445984  874044 start.go:167] duration metric: took 7.411622138s to libmachine.API.Create "calico-411536"
	I0929 13:31:12.445996  874044 start.go:293] postStartSetup for "calico-411536" (driver="docker")
	I0929 13:31:12.446011  874044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:31:12.446090  874044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:31:12.446147  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:12.468280  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:12.572594  874044 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:31:12.576998  874044 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:31:12.577049  874044 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:31:12.577062  874044 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:31:12.577073  874044 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:31:12.577094  874044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/addons for local assets ...
	I0929 13:31:12.577171  874044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-564029/.minikube/files for local assets ...
	I0929 13:31:12.577310  874044 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem -> 5675162.pem in /etc/ssl/certs
	I0929 13:31:12.577469  874044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:31:12.590203  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:31:12.622091  874044 start.go:296] duration metric: took 176.076623ms for postStartSetup
	I0929 13:31:12.622517  874044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-411536
	I0929 13:31:12.642814  874044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/config.json ...
	I0929 13:31:12.643144  874044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:31:12.643194  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:12.663783  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:12.760029  874044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:31:12.765669  874044 start.go:128] duration metric: took 7.733617799s to createHost
	I0929 13:31:12.765700  874044 start.go:83] releasing machines lock for "calico-411536", held for 7.733769165s
	I0929 13:31:12.765791  874044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-411536
	I0929 13:31:12.785724  874044 ssh_runner.go:195] Run: cat /version.json
	I0929 13:31:12.785782  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:12.785786  874044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:31:12.785859  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:12.806838  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:12.807209  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:12.901850  874044 ssh_runner.go:195] Run: systemctl --version
	I0929 13:31:12.979633  874044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 13:31:13.127418  874044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:31:13.133411  874044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:31:13.161290  874044 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:31:13.161395  874044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:31:13.198268  874044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:31:13.198292  874044 start.go:495] detecting cgroup driver to use...
	I0929 13:31:13.198331  874044 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:31:13.198382  874044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:31:13.216498  874044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:31:13.230915  874044 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:31:13.230984  874044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:31:13.247879  874044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:31:13.265367  874044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:31:13.339223  874044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:31:13.418376  874044 docker.go:234] disabling docker service ...
	I0929 13:31:13.418448  874044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:31:13.439247  874044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:31:13.453359  874044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:31:13.526362  874044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:31:13.657471  874044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:31:13.672421  874044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:31:13.692776  874044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 13:31:13.692836  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.708286  874044 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 13:31:13.708349  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.721515  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.733518  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.744804  874044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:31:13.756041  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.768266  874044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 13:31:13.788220  874044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
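
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image and switching CRI-O to the systemd cgroup driver, with unprivileged low ports allowed inside pods. Reconstructed from the substitutions (illustrative; only the individual sed commands appear in the log):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
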
	I0929 13:31:13.801116  874044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:31:13.811251  874044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:31:13.821909  874044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:31:13.943672  874044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 13:31:14.045117  874044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 13:31:14.045199  874044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 13:31:14.050192  874044 start.go:563] Will wait 60s for crictl version
	I0929 13:31:14.050251  874044 ssh_runner.go:195] Run: which crictl
	I0929 13:31:14.054463  874044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:31:14.095500  874044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 13:31:14.095591  874044 ssh_runner.go:195] Run: crio --version
	I0929 13:31:14.139072  874044 ssh_runner.go:195] Run: crio --version
	I0929 13:31:14.183377  874044 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 13:31:14.184933  874044 cli_runner.go:164] Run: docker network inspect calico-411536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:31:14.203758  874044 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0929 13:31:14.208413  874044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:31:14.222337  874044 kubeadm.go:875] updating cluster {Name:calico-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:31:14.222478  874044 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 13:31:14.222526  874044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:31:14.303262  874044 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:31:14.303288  874044 crio.go:433] Images already preloaded, skipping extraction
	I0929 13:31:14.303339  874044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:31:14.343642  874044 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 13:31:14.343666  874044 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:31:14.343674  874044 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0929 13:31:14.343763  874044 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-411536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
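
The kubelet unit drop-in printed above is what gets copied to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 364-byte scp). On the node, the merged unit could be checked with (illustrative):

    systemctl cat kubelet
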
	I0929 13:31:14.343827  874044 ssh_runner.go:195] Run: crio config
	I0929 13:31:14.392781  874044 cni.go:84] Creating CNI manager for "calico"
	I0929 13:31:14.392813  874044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:31:14.392844  874044 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-411536 NodeName:calico-411536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:31:14.393034  874044 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-411536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:31:14.393121  874044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:31:14.405314  874044 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:31:14.405454  874044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:31:14.416307  874044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0929 13:31:14.438128  874044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:31:14.464744  874044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
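
The 2212-byte kubeadm.yaml.new written here is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); it is copied to /var/tmp/minikube/kubeadm.yaml at 13:31:16 below and then handed to kubeadm init via --config. A config like this can be sanity-checked without touching cluster state by running (illustrative):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
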
	I0929 13:31:14.486232  874044 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:31:14.490488  874044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:31:14.505808  874044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:31:14.582401  874044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:31:14.608955  874044 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536 for IP: 192.168.103.2
	I0929 13:31:14.608978  874044 certs.go:194] generating shared ca certs ...
	I0929 13:31:14.608994  874044 certs.go:226] acquiring lock for ca certs: {Name:mk60e93452ecdcb52b01b4859a7ad47bdc94500b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:14.609155  874044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key
	I0929 13:31:14.609193  874044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key
	I0929 13:31:14.609204  874044 certs.go:256] generating profile certs ...
	I0929 13:31:14.609273  874044 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.key
	I0929 13:31:14.609287  874044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.crt with IP's: []
	I0929 13:31:14.746177  874044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.crt ...
	I0929 13:31:14.746219  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.crt: {Name:mkd409bd5b04526ee422e22abf4e77bb0a63d3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:14.746433  874044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.key ...
	I0929 13:31:14.746449  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/client.key: {Name:mk54d56af7928291b58f08f51bcddf7aed28e2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:14.746552  874044 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key.b5291acf
	I0929 13:31:14.746573  874044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt.b5291acf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0929 13:31:15.109229  874044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt.b5291acf ...
	I0929 13:31:15.109268  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt.b5291acf: {Name:mkda6ec873ec820227728edcf10effc6ccb84556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:15.109458  874044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key.b5291acf ...
	I0929 13:31:15.109472  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key.b5291acf: {Name:mkd36d62d0c8e683c8208d5ffe400359cb46c857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:15.109574  874044 certs.go:381] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt.b5291acf -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt
	I0929 13:31:15.109671  874044 certs.go:385] copying /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key.b5291acf -> /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key
	I0929 13:31:15.109734  874044 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.key
	I0929 13:31:15.109753  874044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.crt with IP's: []
	I0929 13:31:15.875292  874044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.crt ...
	I0929 13:31:15.875324  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.crt: {Name:mk874d86b9ba4e29f8b511411519b7d88b554707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:15.875508  874044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.key ...
	I0929 13:31:15.875522  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.key: {Name:mk96380c4a5abf4d2ecc88d93bd01192cab5a634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:15.875713  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem (1338 bytes)
	W0929 13:31:15.875754  874044 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516_empty.pem, impossibly tiny 0 bytes
	I0929 13:31:15.875764  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 13:31:15.875788  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/ca.pem (1082 bytes)
	I0929 13:31:15.875820  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:31:15.875841  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/certs/key.pem (1675 bytes)
	I0929 13:31:15.875879  874044 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem (1708 bytes)
	I0929 13:31:15.876516  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:31:15.907031  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:31:15.938104  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:31:15.968954  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 13:31:15.999283  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 13:31:16.031435  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:31:16.061313  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:31:16.092818  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/calico-411536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:31:16.125568  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/ssl/certs/5675162.pem --> /usr/share/ca-certificates/5675162.pem (1708 bytes)
	I0929 13:31:16.160242  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:31:16.192261  874044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-564029/.minikube/certs/567516.pem --> /usr/share/ca-certificates/567516.pem (1338 bytes)
	I0929 13:31:16.224992  874044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:31:16.249184  874044 ssh_runner.go:195] Run: openssl version
	I0929 13:31:16.255815  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5675162.pem && ln -fs /usr/share/ca-certificates/5675162.pem /etc/ssl/certs/5675162.pem"
	I0929 13:31:16.267643  874044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5675162.pem
	I0929 13:31:16.272069  874044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:32 /usr/share/ca-certificates/5675162.pem
	I0929 13:31:16.272148  874044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5675162.pem
	I0929 13:31:16.280108  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5675162.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:31:16.291662  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:31:16.304211  874044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:31:16.308734  874044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:26 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:31:16.308793  874044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:31:16.317678  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:31:16.329518  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567516.pem && ln -fs /usr/share/ca-certificates/567516.pem /etc/ssl/certs/567516.pem"
	I0929 13:31:16.341309  874044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567516.pem
	I0929 13:31:16.345555  874044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:32 /usr/share/ca-certificates/567516.pem
	I0929 13:31:16.345642  874044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567516.pem
	I0929 13:31:16.354317  874044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567516.pem /etc/ssl/certs/51391683.0"
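
The three test -L ... || ln -fs ... commands above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming (b5213941.0 is the hash symlink for minikubeCA.pem here), which is how OpenSSL-based clients on the node locate trust anchors. The hash-then-symlink step looks roughly like (illustrative):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
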
	I0929 13:31:16.366956  874044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:31:16.371441  874044 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:31:16.371514  874044 kubeadm.go:392] StartCluster: {Name:calico-411536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-411536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:31:16.371591  874044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 13:31:16.371643  874044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:31:16.412241  874044 cri.go:89] found id: ""
	I0929 13:31:16.412318  874044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:31:16.423248  874044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 13:31:16.433965  874044 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 13:31:16.434055  874044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 13:31:16.444303  874044 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 13:31:16.444327  874044 kubeadm.go:157] found existing configuration files:
	
	I0929 13:31:16.444383  874044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 13:31:16.454780  874044 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 13:31:16.454846  874044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 13:31:16.466154  874044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 13:31:16.478051  874044 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 13:31:16.478112  874044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 13:31:16.488176  874044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 13:31:16.499384  874044 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 13:31:16.499449  874044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 13:31:16.510677  874044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 13:31:16.521971  874044 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 13:31:16.522041  874044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 13:31:16.534483  874044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 13:31:16.579361  874044 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 13:31:16.579427  874044 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 13:31:16.598294  874044 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 13:31:16.598383  874044 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 13:31:16.598435  874044 kubeadm.go:310] OS: Linux
	I0929 13:31:16.598504  874044 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 13:31:16.598578  874044 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 13:31:16.598650  874044 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 13:31:16.598744  874044 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 13:31:16.598821  874044 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 13:31:16.598913  874044 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 13:31:16.598962  874044 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 13:31:16.599010  874044 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 13:31:16.662412  874044 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 13:31:16.662574  874044 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 13:31:16.662746  874044 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 13:31:16.669906  874044 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 13:31:16.673074  874044 out.go:252]   - Generating certificates and keys ...
	I0929 13:31:16.673206  874044 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 13:31:16.673292  874044 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 13:31:16.999348  874044 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 13:31:17.251564  874044 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 13:31:17.374299  874044 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 13:31:17.481384  874044 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 13:31:17.589632  874044 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 13:31:17.589778  874044 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-411536 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0929 13:31:17.803774  874044 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 13:31:17.803983  874044 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-411536 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0929 13:31:17.968824  874044 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 13:31:18.107926  874044 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 13:31:18.575054  874044 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 13:31:18.575145  874044 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 13:31:18.760453  874044 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 13:31:19.006387  874044 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 13:31:19.198627  874044 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 13:31:19.425981  874044 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 13:31:19.555997  874044 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 13:31:19.556618  874044 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 13:31:19.561131  874044 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 13:31:19.562928  874044 out.go:252]   - Booting up control plane ...
	I0929 13:31:19.563092  874044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 13:31:19.563201  874044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 13:31:19.563992  874044 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 13:31:19.575189  874044 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 13:31:19.575325  874044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 13:31:19.583761  874044 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 13:31:19.584208  874044 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 13:31:19.584284  874044 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 13:31:19.668195  874044 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 13:31:19.668357  874044 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 13:31:20.669711  874044 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001602367s
	I0929 13:31:20.672973  874044 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 13:31:20.673124  874044 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0929 13:31:20.673277  874044 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 13:31:20.673368  874044 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 13:31:21.977198  874044 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.30417683s
	I0929 13:31:22.862935  874044 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.189995689s
	I0929 13:31:24.675294  874044 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002367047s
	I0929 13:31:24.689323  874044 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 13:31:24.702140  874044 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 13:31:24.718500  874044 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 13:31:24.718812  874044 kubeadm.go:310] [mark-control-plane] Marking the node calico-411536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 13:31:24.729666  874044 kubeadm.go:310] [bootstrap-token] Using token: 4myh3j.tixakw04vtuaduax
	I0929 13:31:24.731076  874044 out.go:252]   - Configuring RBAC rules ...
	I0929 13:31:24.731240  874044 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 13:31:24.736569  874044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 13:31:24.744559  874044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 13:31:24.748509  874044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 13:31:24.751763  874044 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 13:31:24.755546  874044 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 13:31:25.082791  874044 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 13:31:25.500847  874044 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 13:31:26.082515  874044 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 13:31:26.083813  874044 kubeadm.go:310] 
	I0929 13:31:26.083916  874044 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 13:31:26.083928  874044 kubeadm.go:310] 
	I0929 13:31:26.084037  874044 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 13:31:26.084050  874044 kubeadm.go:310] 
	I0929 13:31:26.084091  874044 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 13:31:26.084169  874044 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 13:31:26.084254  874044 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 13:31:26.084266  874044 kubeadm.go:310] 
	I0929 13:31:26.084337  874044 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 13:31:26.084345  874044 kubeadm.go:310] 
	I0929 13:31:26.084408  874044 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 13:31:26.084418  874044 kubeadm.go:310] 
	I0929 13:31:26.084489  874044 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 13:31:26.084605  874044 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 13:31:26.084675  874044 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 13:31:26.084700  874044 kubeadm.go:310] 
	I0929 13:31:26.084834  874044 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 13:31:26.084968  874044 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 13:31:26.085006  874044 kubeadm.go:310] 
	I0929 13:31:26.085102  874044 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4myh3j.tixakw04vtuaduax \
	I0929 13:31:26.085314  874044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b \
	I0929 13:31:26.085358  874044 kubeadm.go:310] 	--control-plane 
	I0929 13:31:26.085365  874044 kubeadm.go:310] 
	I0929 13:31:26.085501  874044 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 13:31:26.085511  874044 kubeadm.go:310] 
	I0929 13:31:26.085627  874044 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4myh3j.tixakw04vtuaduax \
	I0929 13:31:26.085747  874044 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f1ec0d51bd56420112a465b09fe29ae9657dccabe3aeec1b36e508b960ed795b 
	I0929 13:31:26.088767  874044 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 13:31:26.088934  874044 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 13:31:26.088963  874044 cni.go:84] Creating CNI manager for "calico"
	I0929 13:31:26.094214  874044 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 13:31:26.096529  874044 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 13:31:26.096565  874044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 13:31:26.120805  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 13:31:27.077469  874044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 13:31:27.077564  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:27.077613  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-411536 minikube.k8s.io/updated_at=2025_09_29T13_31_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=calico-411536 minikube.k8s.io/primary=true
	I0929 13:31:27.152707  874044 ops.go:34] apiserver oom_adj: -16
	I0929 13:31:27.152923  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:27.653069  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:28.153087  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:28.653187  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:29.153091  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:29.653121  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:30.153520  874044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:31:30.230100  874044 kubeadm.go:1105] duration metric: took 3.152620368s to wait for elevateKubeSystemPrivileges
	I0929 13:31:30.230144  874044 kubeadm.go:394] duration metric: took 13.858635302s to StartCluster
	I0929 13:31:30.230170  874044 settings.go:142] acquiring lock: {Name:mkc0bfb4256c328f1d3eb97cbb227d0af47ae87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:30.230336  874044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:31:30.232583  874044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-564029/kubeconfig: {Name:mkc6d367e76af576e993b18825e9f7b1c511f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:31:30.232996  874044 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 13:31:30.233026  874044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 13:31:30.233127  874044 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:31:30.233234  874044 config.go:182] Loaded profile config "calico-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:31:30.233255  874044 addons.go:69] Setting storage-provisioner=true in profile "calico-411536"
	I0929 13:31:30.233274  874044 addons.go:238] Setting addon storage-provisioner=true in "calico-411536"
	I0929 13:31:30.233273  874044 addons.go:69] Setting default-storageclass=true in profile "calico-411536"
	I0929 13:31:30.233297  874044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-411536"
	I0929 13:31:30.233313  874044 host.go:66] Checking if "calico-411536" exists ...
	I0929 13:31:30.233732  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:30.233859  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:30.234842  874044 out.go:179] * Verifying Kubernetes components...
	I0929 13:31:30.236128  874044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:31:30.264637  874044 addons.go:238] Setting addon default-storageclass=true in "calico-411536"
	I0929 13:31:30.264699  874044 host.go:66] Checking if "calico-411536" exists ...
	I0929 13:31:30.265228  874044 cli_runner.go:164] Run: docker container inspect calico-411536 --format={{.State.Status}}
	I0929 13:31:30.266677  874044 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:31:30.271316  874044 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:31:30.271349  874044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:31:30.271433  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:30.297391  874044 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:31:30.297420  874044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:31:30.297485  874044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-411536
	I0929 13:31:30.301377  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:30.323987  874044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/calico-411536/id_rsa Username:docker}
	I0929 13:31:30.338733  874044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 13:31:30.375948  874044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:31:30.423846  874044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:31:30.447461  874044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:31:30.516723  874044 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0929 13:31:30.518625  874044 node_ready.go:35] waiting up to 15m0s for node "calico-411536" to be "Ready" ...
	I0929 13:31:30.763949  874044 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 13:31:30.765568  874044 addons.go:514] duration metric: took 532.459842ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 13:31:31.021260  874044 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-411536" context rescaled to 1 replicas
	W0929 13:31:32.524353  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:35.022163  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:37.522259  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:40.022623  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:42.521725  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:44.522018  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:46.522181  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:48.522266  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:50.522586  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:52.522658  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:55.022610  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:31:57.522636  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:00.022041  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:02.022095  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:04.022175  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:06.022912  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:08.521669  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:10.522040  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:13.023118  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:15.521855  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:18.022352  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:20.022482  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:22.522687  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:25.022661  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:27.023697  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:29.525221  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:31.526204  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:34.024856  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:36.522117  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:39.021703  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:41.022874  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:43.027743  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:45.521853  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:47.522931  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:50.021755  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:52.022460  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:54.024495  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:56.522314  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:32:59.022352  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:01.522083  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:03.523087  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:06.022446  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:08.521988  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:10.524461  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:13.022715  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:15.521951  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:17.522739  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:19.522920  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:22.021841  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:24.022299  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:26.022907  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:28.023949  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:30.522720  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:33.023580  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:35.521679  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:37.522676  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:40.022834  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:42.023224  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:44.521917  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:46.522039  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:48.522709  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:51.022368  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:53.522397  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:56.022104  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:33:58.022798  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:00.522865  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:02.523310  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:05.022143  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:07.022338  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:09.023415  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:11.522276  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:14.022034  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:16.024932  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:18.522135  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:21.024924  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:23.521896  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:26.022065  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:28.522010  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:30.522505  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:32.522701  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:34.523295  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:37.022768  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:39.522379  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:42.022783  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:44.522200  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:46.522326  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:49.022871  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:51.522382  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:53.522436  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:56.022952  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:34:58.522282  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:00.522466  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:02.522756  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:04.522986  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:07.022461  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:09.022935  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:11.522299  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:13.522571  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:16.022510  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:18.522537  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:21.022719  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:23.522715  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:26.022359  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:28.521976  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:30.522205  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:32.522295  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:34.522375  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:36.522500  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:39.022323  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:41.522592  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:44.022029  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:46.522653  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:48.522709  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:51.023087  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:53.521611  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:55.522394  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:35:58.022250  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:00.025683  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:02.521874  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:04.522020  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:06.522479  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:09.022767  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:11.522055  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:13.522345  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:15.522566  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:18.022299  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:20.022708  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:22.522198  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:25.022553  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:27.521822  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:29.522439  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:32.022033  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:34.522465  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:37.022244  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:39.522768  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:42.021875  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:44.522378  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:46.522502  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:48.522583  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:51.022423  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:53.521606  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:55.521767  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:57.522376  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:36:59.522427  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:02.021736  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:04.022536  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:06.521952  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:08.522356  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:11.022864  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:13.522297  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:16.022574  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:18.522472  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:21.022527  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:23.522040  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:25.522929  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:28.022524  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:30.522542  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:33.022460  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:35.521860  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:37.522029  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:39.522448  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:42.021759  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:44.021919  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:46.522196  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:49.021711  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:51.022733  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:53.522594  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:56.022226  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:37:58.521615  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:00.522182  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:03.022671  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:05.522010  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:07.522326  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:09.522460  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:12.021991  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:14.022819  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:16.522483  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:19.021743  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:21.022647  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:23.522350  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:26.022436  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:28.522292  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:31.022677  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:33.521798  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:36.022391  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:38.022753  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:40.522865  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:43.022377  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:45.521863  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:48.022068  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:50.022671  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:52.022849  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:54.522011  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:56.522451  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:38:59.021720  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:01.021941  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:03.022520  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:05.022827  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:07.522331  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:10.022442  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:12.522125  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:14.522823  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:17.021875  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:19.022251  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:21.522014  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:24.022670  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:26.522405  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:29.022818  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:31.521476  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:33.522490  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:36.021641  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:38.022731  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:40.521645  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:42.522678  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:45.022773  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:47.521766  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:49.522263  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:51.522580  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:54.022491  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:56.522588  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:39:59.022356  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:01.522104  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:04.022615  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:06.022673  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:08.522356  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:11.021842  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:13.522359  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:16.022500  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:18.522105  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:20.522200  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:23.022603  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:25.522430  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:27.522717  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:30.022589  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:32.022899  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:34.522088  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:36.522600  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:39.022001  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:41.522619  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:44.022041  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:46.022594  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:48.522476  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:51.022125  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:53.023022  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:55.522369  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:40:58.022608  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:00.523263  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:03.022597  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:05.522602  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:08.022727  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:10.522595  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:13.022664  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:15.522277  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:18.021871  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:20.022524  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:22.522006  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:24.522691  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:27.022498  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:29.521718  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:31.522419  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:34.022471  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:36.522210  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:38.522901  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:41.022009  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:43.022632  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:45.522057  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:48.022406  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:50.022679  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:52.522127  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:54.522387  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:57.022549  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:41:59.521612  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:01.522450  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:03.522609  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:06.022209  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:08.022549  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:10.521906  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:12.522741  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:15.022718  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:17.521646  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:19.521968  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:21.522684  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:24.022693  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:26.521698  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:29.022596  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:31.522238  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:33.522638  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:36.022595  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:38.522573  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:41.021894  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:43.022322  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:45.022772  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:47.522496  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:50.022066  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:52.022117  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:54.522766  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:57.021802  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:42:59.522051  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:01.522460  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:04.022234  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:06.022350  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:08.522586  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:10.522661  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:13.021996  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:15.022433  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:17.522522  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:20.022653  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:22.522859  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:25.022014  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:27.022093  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:29.022848  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:31.023086  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:33.521973  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:35.522314  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:38.021849  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:40.022756  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:42.521913  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:45.022532  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:47.521907  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:49.522210  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:52.022440  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:54.022584  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:56.022768  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:43:58.522369  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:01.022338  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:03.022406  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:05.022696  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:07.522125  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:10.022552  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:12.022844  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:14.522034  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:16.522605  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:18.522988  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:21.021804  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:23.021924  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:25.022791  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:27.521676  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:29.522266  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:32.022606  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:34.023360  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:36.522338  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:38.522394  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:40.522494  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:43.022207  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:45.522282  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:47.522717  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:50.022850  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:52.522197  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:55.022543  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:57.022637  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:44:59.521621  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:01.522210  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:04.022664  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:06.522663  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:09.022145  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:11.022415  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:13.523044  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:16.022635  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:18.522642  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:20.523031  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:23.022598  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:25.522296  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:27.522644  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:29.522756  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:32.022288  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:34.022839  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:36.521967  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:38.522032  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:40.522129  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:43.022063  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:45.022108  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:47.521968  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:50.022702  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:52.521777  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:54.522584  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:57.022386  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:45:59.522713  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:02.024324  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:04.522194  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:06.522285  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:09.022005  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:11.023059  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:13.522141  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:15.522213  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:18.022173  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:20.522745  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:23.022127  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:25.522218  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:27.522324  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	W0929 13:46:30.022056  874044 node_ready.go:57] node "calico-411536" has "Ready":"False" status (will retry)
	I0929 13:46:30.519867  874044 node_ready.go:38] duration metric: took 15m0.001195353s for node "calico-411536" to be "Ready" ...
	I0929 13:46:30.522199  874044 out.go:203] 
	W0929 13:46:30.523827  874044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0929 13:46:30.523853  874044 out.go:285] * 
	* 
	W0929 13:46:30.525656  874044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0929 13:46:30.527237  874044 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (925.80s)
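The calico profile never left the "Ready":"False" condition before the 15m wait expired, so minikube aborted with GUEST_START. A minimal triage sketch for reproducing this locally (assumptions: the profile is still running, minikube named the kubectl context after the profile, and the bundled Calico manifests keep the usual k8s-app=calico-node label):

	# node and CNI pod state
	kubectl --context calico-411536 get nodes -o wide
	kubectl --context calico-411536 -n kube-system get pods -l k8s-app=calico-node -o wide
	kubectl --context calico-411536 describe node calico-411536 | grep -A5 Conditions
	# collect full logs for a GitHub issue, as the failure text suggests
	out/minikube-linux-amd64 -p calico-411536 logs --file=logs.txt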

                                                
                                    

Test pass (280/325)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.02
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.35
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.23
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 1.23
21 TestBinaryMirror 0.84
22 TestOffline 93.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 152.37
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.51
35 TestAddons/parallel/Registry 19.78
36 TestAddons/parallel/RegistryCreds 0.67
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.75
41 TestAddons/parallel/CSI 50.22
42 TestAddons/parallel/Headlamp 16.6
43 TestAddons/parallel/CloudSpanner 5.56
44 TestAddons/parallel/LocalPath 56.75
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
46 TestAddons/parallel/Yakd 11.85
47 TestAddons/parallel/AmdGpuDevicePlugin 6.54
48 TestAddons/StoppedEnableDisable 16.67
49 TestCertOptions 28.77
50 TestCertExpiration 212.11
52 TestForceSystemdFlag 29.08
53 TestForceSystemdEnv 28.14
55 TestKVMDriverInstallOrUpdate 0.91
59 TestErrorSpam/setup 21.55
60 TestErrorSpam/start 0.68
61 TestErrorSpam/status 0.97
62 TestErrorSpam/pause 1.55
63 TestErrorSpam/unpause 1.58
64 TestErrorSpam/stop 2.6
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.19
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.4
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.19
76 TestFunctional/serial/CacheCmd/cache/add_local 1.88
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 41.91
85 TestFunctional/serial/ComponentHealth 0.08
86 TestFunctional/serial/LogsCmd 1.59
87 TestFunctional/serial/LogsFileCmd 1.6
88 TestFunctional/serial/InvalidService 4.39
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 16.15
92 TestFunctional/parallel/DryRun 0.49
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.06
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.62
103 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/MySQL 19.21
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.88
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
114 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.53
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.15
123 TestFunctional/parallel/ImageCommands/Setup 1.59
124 TestFunctional/parallel/ProfileCmd/profile_list 0.46
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
130 TestFunctional/parallel/MountCmd/any-port 13.95
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.18
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.78
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
142 TestFunctional/parallel/MountCmd/specific-port 2
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 118.72
163 TestMultiControlPlane/serial/DeployApp 6.84
164 TestMultiControlPlane/serial/PingHostFromPods 1.19
165 TestMultiControlPlane/serial/AddWorkerNode 24.85
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.39
169 TestMultiControlPlane/serial/StopSecondaryNode 13.29
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.66
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.18
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.53
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 42.05
177 TestMultiControlPlane/serial/RestartCluster 59.97
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 67.49
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
184 TestJSONOutput/start/Command 40.85
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.76
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.66
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.95
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 29.68
210 TestKicCustomNetwork/use_default_bridge_network 23.43
211 TestKicExistingNetwork 24.88
212 TestKicCustomSubnet 26.67
213 TestKicStaticIP 26.4
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 50.44
218 TestMountStart/serial/StartWithMountFirst 6.57
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 5.77
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.7
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.82
226 TestMountStart/serial/VerifyMountPostStop 0.28
229 TestMultiNode/serial/FreshStart2Nodes 64.25
230 TestMultiNode/serial/DeployApp2Nodes 6.54
231 TestMultiNode/serial/PingHostFrom2Pods 0.87
232 TestMultiNode/serial/AddNode 24.44
233 TestMultiNode/serial/MultiNodeLabels 0.07
234 TestMultiNode/serial/ProfileList 0.68
235 TestMultiNode/serial/CopyFile 10.14
236 TestMultiNode/serial/StopNode 2.37
237 TestMultiNode/serial/StartAfterStop 7.54
238 TestMultiNode/serial/RestartKeepsNodes 82.09
239 TestMultiNode/serial/DeleteNode 5.43
240 TestMultiNode/serial/StopMultiNode 28.84
241 TestMultiNode/serial/RestartMultiNode 46.83
242 TestMultiNode/serial/ValidateNameConflict 23.78
247 TestPreload 119.63
249 TestScheduledStopUnix 97.72
252 TestInsufficientStorage 10.48
253 TestRunningBinaryUpgrade 56.66
255 TestKubernetesUpgrade 301.29
256 TestMissingContainerUpgrade 114.03
257 TestStoppedBinaryUpgrade/Setup 0.57
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
260 TestNoKubernetes/serial/StartWithK8s 42.37
261 TestStoppedBinaryUpgrade/Upgrade 72.11
262 TestNoKubernetes/serial/StartWithStopK8s 24.02
263 TestNoKubernetes/serial/Start 5.36
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
265 TestNoKubernetes/serial/ProfileList 1.59
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
267 TestNoKubernetes/serial/Stop 1.22
268 TestNoKubernetes/serial/StartNoArgs 7.95
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
278 TestPause/serial/Start 48.81
282 TestPause/serial/SecondStartNoReconfiguration 8.02
287 TestNetworkPlugins/group/false 3.85
291 TestPause/serial/Pause 0.72
292 TestPause/serial/VerifyStatus 0.34
293 TestPause/serial/Unpause 0.74
294 TestPause/serial/PauseAgain 0.83
295 TestPause/serial/DeletePaused 4.79
296 TestPause/serial/VerifyDeletedResources 19.32
298 TestStartStop/group/old-k8s-version/serial/FirstStart 53.93
300 TestStartStop/group/no-preload/serial/FirstStart 53.08
301 TestStartStop/group/old-k8s-version/serial/DeployApp 11.3
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
303 TestStartStop/group/old-k8s-version/serial/Stop 16.1
304 TestStartStop/group/no-preload/serial/DeployApp 12.26
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/old-k8s-version/serial/SecondStart 50.33
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
308 TestStartStop/group/no-preload/serial/Stop 16.27
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
310 TestStartStop/group/no-preload/serial/SecondStart 44.97
314 TestStartStop/group/embed-certs/serial/FirstStart 69.8
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.19
317 TestStartStop/group/embed-certs/serial/DeployApp 11.28
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
319 TestStartStop/group/embed-certs/serial/Stop 18.21
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.36
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 46.42
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.81
333 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
334 TestStartStop/group/old-k8s-version/serial/Pause 2.85
336 TestStartStop/group/newest-cni/serial/FirstStart 28.42
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
338 TestStartStop/group/no-preload/serial/Pause 2.85
339 TestNetworkPlugins/group/auto/Start 40.68
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
342 TestStartStop/group/newest-cni/serial/Stop 2.4
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
344 TestStartStop/group/newest-cni/serial/SecondStart 11.86
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
348 TestStartStop/group/newest-cni/serial/Pause 2.8
349 TestNetworkPlugins/group/kindnet/Start 70.14
350 TestNetworkPlugins/group/auto/KubeletFlags 0.3
351 TestNetworkPlugins/group/auto/NetCatPod 9.21
352 TestNetworkPlugins/group/auto/DNS 0.16
353 TestNetworkPlugins/group/auto/Localhost 0.14
354 TestNetworkPlugins/group/auto/HairPin 0.14
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
359 TestNetworkPlugins/group/kindnet/DNS 0.14
360 TestNetworkPlugins/group/kindnet/Localhost 0.12
361 TestNetworkPlugins/group/kindnet/HairPin 0.12
362 TestNetworkPlugins/group/custom-flannel/Start 51.21
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
364 TestStartStop/group/embed-certs/serial/Pause 2.86
365 TestNetworkPlugins/group/enable-default-cni/Start 62.66
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.46
368 TestNetworkPlugins/group/flannel/Start 52.95
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
371 TestNetworkPlugins/group/custom-flannel/DNS 0.15
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
374 TestNetworkPlugins/group/bridge/Start 60.5
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/flannel/NetCatPod 8.21
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
383 TestNetworkPlugins/group/flannel/DNS 0.15
384 TestNetworkPlugins/group/flannel/Localhost 0.13
385 TestNetworkPlugins/group/flannel/HairPin 0.13
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
387 TestNetworkPlugins/group/bridge/NetCatPod 9.19
388 TestNetworkPlugins/group/bridge/DNS 0.15
389 TestNetworkPlugins/group/bridge/Localhost 0.14
390 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (5.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-267819 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-267819 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.016732551s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.02s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 12:25:34.764558  567516 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 12:25:34.764665  567516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
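The preload check only verifies that the cached tarball is on disk; a quick way to confirm the same thing by hand (a sketch, reusing the cache path from the log above):

	ls -lh /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/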

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-267819
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-267819: exit status 85 (63.802273ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-267819 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-267819 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:25:29
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:25:29.791332  567528 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:25:29.791462  567528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:29.791471  567528 out.go:374] Setting ErrFile to fd 2...
	I0929 12:25:29.791475  567528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:29.791707  567528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	W0929 12:25:29.791871  567528 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21652-564029/.minikube/config/config.json: open /home/jenkins/minikube-integration/21652-564029/.minikube/config/config.json: no such file or directory
	I0929 12:25:29.792387  567528 out.go:368] Setting JSON to true
	I0929 12:25:29.793382  567528 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7675,"bootTime":1759141055,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:25:29.793488  567528 start.go:140] virtualization: kvm guest
	I0929 12:25:29.796080  567528 out.go:99] [download-only-267819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 12:25:29.796282  567528 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 12:25:29.796294  567528 notify.go:220] Checking for updates...
	I0929 12:25:29.797933  567528 out.go:171] MINIKUBE_LOCATION=21652
	I0929 12:25:29.799612  567528 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:25:29.801086  567528 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:25:29.802511  567528 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:25:29.804020  567528 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 12:25:29.806689  567528 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 12:25:29.807087  567528 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:25:29.832213  567528 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:25:29.832296  567528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:29.890049  567528 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:25:29.879410289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:29.890157  567528 docker.go:318] overlay module found
	I0929 12:25:29.892569  567528 out.go:99] Using the docker driver based on user configuration
	I0929 12:25:29.892611  567528 start.go:304] selected driver: docker
	I0929 12:25:29.892619  567528 start.go:924] validating driver "docker" against <nil>
	I0929 12:25:29.892754  567528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:29.954261  567528 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:25:29.944042067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:29.954449  567528 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:25:29.955072  567528 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 12:25:29.955351  567528 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 12:25:29.957546  567528 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-267819 host does not exist
	  To start a cluster, run: "minikube start -p download-only-267819"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-267819
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-463012 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-463012 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.349254283s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.35s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 12:25:39.545116  567516 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 12:25:39.545162  567516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-564029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-463012
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-463012: exit status 85 (70.513382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-267819 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-267819 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p download-only-267819                                                                                                                                                   │ download-only-267819 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ start   │ -o=json --download-only -p download-only-463012 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-463012 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:25:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:25:35.238941  567896 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:25:35.239200  567896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:35.239208  567896 out.go:374] Setting ErrFile to fd 2...
	I0929 12:25:35.239212  567896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:25:35.239436  567896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:25:35.239948  567896 out.go:368] Setting JSON to true
	I0929 12:25:35.240852  567896 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7680,"bootTime":1759141055,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:25:35.240967  567896 start.go:140] virtualization: kvm guest
	I0929 12:25:35.243211  567896 out.go:99] [download-only-463012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:25:35.243379  567896 notify.go:220] Checking for updates...
	I0929 12:25:35.245007  567896 out.go:171] MINIKUBE_LOCATION=21652
	I0929 12:25:35.246509  567896 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:25:35.248039  567896 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:25:35.252579  567896 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:25:35.253947  567896 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 12:25:35.256404  567896 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 12:25:35.256743  567896 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:25:35.280327  567896 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:25:35.280397  567896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:35.336373  567896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 12:25:35.326091364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:35.336470  567896 docker.go:318] overlay module found
	I0929 12:25:35.338340  567896 out.go:99] Using the docker driver based on user configuration
	I0929 12:25:35.338366  567896 start.go:304] selected driver: docker
	I0929 12:25:35.338372  567896 start.go:924] validating driver "docker" against <nil>
	I0929 12:25:35.338453  567896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:25:35.396200  567896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 12:25:35.385532613 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:25:35.396371  567896 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:25:35.396920  567896 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 12:25:35.397090  567896 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 12:25:35.399345  567896 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-463012 host does not exist
	  To start a cluster, run: "minikube start -p download-only-463012"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-463012
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (1.23s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-347304 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-347304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-347304
--- PASS: TestDownloadOnlyKic (1.23s)

                                                
                                    
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 12:25:41.518077  567516 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-475224 --alsologtostderr --binary-mirror http://127.0.0.1:40411 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-475224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-475224
--- PASS: TestBinaryMirror (0.84s)

                                                
                                    
TestOffline (93.79s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-244528 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-244528 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m30.389062298s)
helpers_test.go:175: Cleaning up "offline-crio-244528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-244528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-244528: (3.396681833s)
--- PASS: TestOffline (93.79s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-850167
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-850167: exit status 85 (57.486834ms)

                                                
                                                
-- stdout --
	* Profile "addons-850167" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850167"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-850167
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-850167: exit status 85 (58.331844ms)

                                                
                                                
-- stdout --
	* Profile "addons-850167" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850167"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (152.37s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-850167 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-850167 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.370639113s)
--- PASS: TestAddons/Setup (152.37s)
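Setup enables every addon in a single start invocation; on an existing profile the same addons can be toggled one at a time. A hedged sketch (assuming the addons-850167 profile from this run is still present):

	out/minikube-linux-amd64 -p addons-850167 addons list
	out/minikube-linux-amd64 -p addons-850167 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-850167 addons disable metrics-server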

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-850167 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-850167 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-850167 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-850167 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e8dfbf8-8764-4b28-b2c8-95b6cca4c3e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e8dfbf8-8764-4b28-b2c8-95b6cca4c3e6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004441345s
addons_test.go:694: (dbg) Run:  kubectl --context addons-850167 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-850167 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-850167 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.51s)
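
A minimal sketch for repeating the credential-injection check by hand, reusing the busybox manifest referenced above:

# The gcp-auth webhook should inject both variables into newly created pods.
kubectl --context addons-850167 create -f testdata/busybox.yaml
kubectl --context addons-850167 wait --for=condition=Ready pod/busybox --timeout=120s
kubectl --context addons-850167 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
kubectl --context addons-850167 exec busybox -- printenv GOOGLE_CLOUD_PROJECT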

                                                
                                    
x
+
TestAddons/parallel/Registry (19.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.841641ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-twx58" [b32cfa5b-9352-4010-90d8-297dfa02ac34] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003405143s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cmwmr" [46568cae-72ac-4e0a-ad3d-d04517b8a42d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003892223s
addons_test.go:392: (dbg) Run:  kubectl --context addons-850167 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-850167 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-850167 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.967726493s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 ip
2025/09/29 12:28:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.78s)
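
A minimal sketch for reaching the registry addon by hand, from inside the cluster and from the host; 192.168.49.2 is the node IP reported above, and /v2/_catalog is the standard registry API path (an assumption about how the proxy exposes it, not part of the test):

# In-cluster check via the kube-system service, as the test does.
kubectl --context addons-850167 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# Host-side check through the registry-proxy on the node.
curl -s http://192.168.49.2:5000/v2/_catalog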

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.43694ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-850167
addons_test.go:332: (dbg) Run:  kubectl --context addons-850167 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pz2rt" [5a62736f-b4a7-4225-a42e-5e10208e932a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003713696s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.740527ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-nxj9b" [2569283a-cdf9-4200-ac06-2fdecd0a966d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004669084s
addons_test.go:463: (dbg) Run:  kubectl --context addons-850167 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)
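
A minimal sketch for exercising the metrics API by hand once metrics-server reports healthy (kubectl top is the standard client for it):

# Pod- and node-level metrics should both resolve once the addon is up.
kubectl --context addons-850167 top pods -n kube-system
kubectl --context addons-850167 top nodes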

                                                
                                    
x
+
TestAddons/parallel/CSI (50.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 12:28:39.621950  567516 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 12:28:39.626414  567516 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 12:28:39.626446  567516 kapi.go:107] duration metric: took 4.524688ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.535015ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-850167 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-850167 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b4d03316-08cd-4a5d-82e1-22c813530b0d] Pending
helpers_test.go:352: "task-pv-pod" [b4d03316-08cd-4a5d-82e1-22c813530b0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b4d03316-08cd-4a5d-82e1-22c813530b0d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.055859239s
addons_test.go:572: (dbg) Run:  kubectl --context addons-850167 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-850167 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-850167 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-850167 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-850167 delete pod task-pv-pod: (1.022002518s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-850167 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-850167 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-850167 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b61683a7-2e99-4b05-b8dd-c9fda53ce96e] Pending
helpers_test.go:352: "task-pv-pod-restore" [b61683a7-2e99-4b05-b8dd-c9fda53ce96e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b61683a7-2e99-4b05-b8dd-c9fda53ce96e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004656409s
addons_test.go:614: (dbg) Run:  kubectl --context addons-850167 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-850167 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-850167 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.602356438s)
--- PASS: TestAddons/parallel/CSI (50.22s)
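
The testdata manifests are not reproduced in this report; a rough sketch of the snapshot and restore objects the flow above relies on, assuming the csi-hostpath-driver addon's default class names (csi-hostpath-sc, csi-hostpath-snapclass), would be:

# Sketch only: object names mirror the log, class names are assumptions about
# the addon defaults, not the contents of the testdata files.
kubectl --context addons-850167 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF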

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-850167 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-x89b6" [5a51e739-a283-4b60-9de3-6a29cbac7a2a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-x89b6" [5a51e739-a283-4b60-9de3-6a29cbac7a2a] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004493377s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable headlamp --alsologtostderr -v=1: (5.790490837s)
--- PASS: TestAddons/parallel/Headlamp (16.60s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-m5hlh" [eab8b8ad-a68f-4109-b208-f7c377bff035] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004784054s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.75s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-850167 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-850167 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7a2b1d4c-9964-4a6e-a304-76211776eb47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7a2b1d4c-9964-4a6e-a304-76211776eb47] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7a2b1d4c-9964-4a6e-a304-76211776eb47] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003052375s
addons_test.go:967: (dbg) Run:  kubectl --context addons-850167 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 ssh "cat /opt/local-path-provisioner/pvc-0da0b4bd-5e36-413e-b23b-30cac371151a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-850167 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-850167 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.815796991s)
--- PASS: TestAddons/parallel/LocalPath (56.75s)
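
A short sketch for locating the host directory behind the claim by hand, assuming the profile is still up; the provisioner path comes from the ssh command above:

# Resolve the PV name bound to the claim, then list the backing host directory.
kubectl --context addons-850167 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
out/minikube-linux-amd64 -p addons-850167 ssh "ls /opt/local-path-provisioner/"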

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jnqpv" [b92b4506-8165-4963-a5be-49561337f056] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003356749s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lcndx" [6bf16644-08f4-4ee7-abeb-3bf6ca454067] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004194663s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-850167 addons disable yakd --alsologtostderr -v=1: (5.848045879s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-dbl96" [683f8734-138a-4e25-9296-188f5ee6056f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003418794s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.67s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-850167
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-850167: (16.404021423s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-850167
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-850167
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-850167
--- PASS: TestAddons/StoppedEnableDisable (16.67s)

                                                
                                    
x
+
TestCertOptions (28.77s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-551828 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-551828 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.643398191s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-551828 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-551828 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-551828 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-551828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-551828
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-551828: (2.504758571s)
--- PASS: TestCertOptions (28.77s)
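
A minimal sketch for confirming by hand that the extra --apiserver-ips/--apiserver-names landed in the certificate, while the cert-options-551828 profile still exists:

# Print only the SAN block of the generated apiserver certificate.
out/minikube-linux-amd64 -p cert-options-551828 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"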

                                                
                                    
x
+
TestCertExpiration (212.11s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171552 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171552 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.552201395s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171552 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.072463217s)
helpers_test.go:175: Cleaning up "cert-expiration-171552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-171552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-171552: (2.489374964s)
--- PASS: TestCertExpiration (212.11s)
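
A short sketch for checking the remaining certificate lifetime by hand while the profile exists; openssl's -checkend exits non-zero if the certificate expires within the given number of seconds:

# Shows the notAfter date and fails if the cert expires within the next hour.
out/minikube-linux-amd64 -p cert-expiration-171552 ssh \
  "openssl x509 -noout -enddate -checkend 3600 -in /var/lib/minikube/certs/apiserver.crt"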

                                                
                                    
x
+
TestForceSystemdFlag (29.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-398130 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-398130 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.164000421s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-398130 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-398130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-398130
E0929 13:09:09.425074  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-398130: (3.607608679s)
--- PASS: TestForceSystemdFlag (29.08s)
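
A minimal sketch of the check this test performs, assuming the generated CRI-O drop-in sets the cgroup_manager key (the file path is taken from the ssh command above; the key name is an assumption about the config contents):

# With --force-systemd the cgroup manager is expected to be "systemd".
out/minikube-linux-amd64 -p force-systemd-flag-398130 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"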

                                                
                                    
x
+
TestForceSystemdEnv (28.14s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-189778 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-189778 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.610718903s)
helpers_test.go:175: Cleaning up "force-systemd-env-189778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-189778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-189778: (2.528863987s)
--- PASS: TestForceSystemdEnv (28.14s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0.91s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 13:08:39.877210  567516 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 13:08:39.877389  567516 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3433578413/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 13:08:39.911044  567516 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3433578413/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 13:08:39.911091  567516 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 13:08:39.911214  567516 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 13:08:39.911276  567516 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3433578413/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (0.91s)

                                                
                                    
x
+
TestErrorSpam/setup (21.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-162409 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-162409 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-162409 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-162409 --driver=docker  --container-runtime=crio: (21.552603255s)
--- PASS: TestErrorSpam/setup (21.55s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.97s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
x
+
TestErrorSpam/stop (2.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 stop: (2.397154058s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-162409 --log_dir /tmp/nospam-162409 stop
--- PASS: TestErrorSpam/stop (2.60s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21652-564029/.minikube/files/etc/test/nested/copy/567516/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (39.19s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-253578 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.193420445s)
--- PASS: TestFunctional/serial/StartWithProxy (39.19s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (7.4s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 12:33:03.139953  567516 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-253578 --alsologtostderr -v=8: (7.399169774s)
functional_test.go:678: soft start took 7.399994738s for "functional-253578" cluster.
I0929 12:33:10.539579  567516 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (7.40s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-253578 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:3.3: (1.179983047s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:latest: (1.017642285s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-253578 /tmp/TestFunctionalserialCacheCmdcacheadd_local1502067060/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache add minikube-local-cache-test:functional-253578
E0929 12:33:15.385129  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:15.391701  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:15.403239  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:15.424749  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:15.466338  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:15.547852  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 cache add minikube-local-cache-test:functional-253578: (1.513512163s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache delete minikube-local-cache-test:functional-253578
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-253578
E0929 12:33:15.709451  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl images
E0929 12:33:16.031600  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0929 12:33:16.673086  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.70307ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0929 12:33:17.954674  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
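
The same cache round-trip can be run by hand; a minimal sketch using the subcommands that appear in this suite plus cache list:

# Push an image into the profile's cache, drop it on the node, then reload it.
out/minikube-linux-amd64 -p functional-253578 cache add registry.k8s.io/pause:latest
out/minikube-linux-amd64 cache list
out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-253578 cache reload
out/minikube-linux-amd64 -p functional-253578 ssh sudo crictl inspecti registry.k8s.io/pause:latest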

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 kubectl -- --context functional-253578 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-253578 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 12:33:20.516500  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:25.638837  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:35.880717  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:56.362875  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-253578 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.905750735s)
functional_test.go:776: restart took 41.906041119s for "functional-253578" cluster.
I0929 12:34:00.219500  567516 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (41.91s)
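
A minimal sketch for confirming the --extra-config value reached the apiserver, assuming the usual component=kube-apiserver label on the kubeadm static pod:

# The admission-plugins flag should appear in the static pod's command line.
kubectl --context functional-253578 -n kube-system get pod \
  -l component=kube-apiserver -o yaml | grep enable-admission-plugins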

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-253578 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 logs: (1.585295629s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 logs --file /tmp/TestFunctionalserialLogsFileCmd374247677/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 logs --file /tmp/TestFunctionalserialLogsFileCmd374247677/001/logs.txt: (1.603549983s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-253578 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-253578
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-253578: exit status 115 (359.156308ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30523 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-253578 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
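
A short sketch that makes the SVC_UNREACHABLE cause visible by hand: the service from testdata/invalidsvc.yaml has no ready endpoints, so minikube service refuses to open it:

# An empty ENDPOINTS column explains the SVC_UNREACHABLE exit above.
kubectl --context functional-253578 apply -f testdata/invalidsvc.yaml
kubectl --context functional-253578 get endpoints invalid-svc
kubectl --context functional-253578 delete -f testdata/invalidsvc.yaml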

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 config get cpus: exit status 14 (61.942981ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 config get cpus: exit status 14 (64.476791ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
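
The ConfigCmd sequence above relies on `config get cpus` exiting with status 14 whenever the key is not set, and 0 once `config set cpus 2` has stored a value. Below is a table-driven sketch of the same round trip; the binary path, profile name, and expected codes are taken from this report and it is not the test's actual implementation.
-- go sketch --
// Illustrative only: runs the unset/get/set/get/unset/get sequence and
// compares exit codes; 14 is the "specified key could not be found in
// config" status seen above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) int {
	err := exec.Command("out/minikube-linux-amd64", args...).Run()
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	if err != nil {
		return -1 // command could not be started at all
	}
	return 0
}

func main() {
	steps := []struct {
		args []string
		want int
	}{
		{[]string{"-p", "functional-253578", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-253578", "config", "get", "cpus"}, 14},
		{[]string{"-p", "functional-253578", "config", "set", "cpus", "2"}, 0},
		{[]string{"-p", "functional-253578", "config", "get", "cpus"}, 0},
		{[]string{"-p", "functional-253578", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-253578", "config", "get", "cpus"}, 14},
	}
	for _, s := range steps {
		fmt.Println(s.args, "-> got", run(s.args...), "want", s.want)
	}
}
-- /go sketch --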

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-253578 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-253578 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 606287: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.15s)
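
DashboardCmd starts the dashboard as a background daemon and then tears it down; the "unable to kill pid ... process already finished" note above is benign, since the process had already exited by the time cleanup ran. A sketch of that start/stop pattern, using the binary path and flags shown in the log (illustrative only, not the harness code):
-- go sketch --
// Launch "minikube dashboard --url" in the background, then stop it,
// tolerating the case where the process has already finished.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url", "--port", "36195", "-p", "functional-253578")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	time.Sleep(2 * time.Second) // give it a moment, as the test does while polling the URL
	if err := cmd.Process.Kill(); err != nil {
		fmt.Println("kill:", err) // e.g. "os: process already finished"
	}
	_ = cmd.Wait()
}
-- /go sketch --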

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-253578 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.16777ms)

                                                
                                                
-- stdout --
	* [functional-253578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:34:12.241388  605781 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:34:12.241728  605781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.241738  605781 out.go:374] Setting ErrFile to fd 2...
	I0929 12:34:12.241744  605781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:12.242053  605781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:34:12.242734  605781 out.go:368] Setting JSON to false
	I0929 12:34:12.244327  605781 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8197,"bootTime":1759141055,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:34:12.244407  605781 start.go:140] virtualization: kvm guest
	I0929 12:34:12.247147  605781 out.go:179] * [functional-253578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:34:12.251362  605781 notify.go:220] Checking for updates...
	I0929 12:34:12.251426  605781 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:34:12.252979  605781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:34:12.254483  605781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:34:12.256895  605781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:34:12.259697  605781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:34:12.261499  605781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:34:12.265868  605781 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:34:12.266605  605781 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:34:12.307686  605781 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:34:12.307785  605781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:12.391809  605781 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 12:34:12.378416126 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:12.392029  605781 docker.go:318] overlay module found
	I0929 12:34:12.394076  605781 out.go:179] * Using the docker driver based on existing profile
	I0929 12:34:12.395710  605781 start.go:304] selected driver: docker
	I0929 12:34:12.395743  605781 start.go:924] validating driver "docker" against &{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:12.395930  605781 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:34:12.398082  605781 out.go:203] 
	W0929 12:34:12.399518  605781 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 12:34:12.403387  605781 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
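
Both DryRun invocations above exit with status 23 because the requested 250MB is below the usable minimum of 1800MB (RSRC_INSUFFICIENT_REQ_MEMORY). The sketch below is not minikube's actual validation code, just an illustration of the threshold check and exit status visible in the log:
-- go sketch --
// Illustrative memory validation: a request below the usable minimum aborts
// with the same message class and exit code (23) observed above.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // minimum reported in the log above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23)
	}
}
-- /go sketch --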

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-253578 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-253578 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.965788ms)

                                                
                                                
-- stdout --
	* [functional-253578] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:34:10.505702  604468 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:34:10.505814  604468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:10.505821  604468 out.go:374] Setting ErrFile to fd 2...
	I0929 12:34:10.505827  604468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:34:10.506302  604468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:34:10.506941  604468 out.go:368] Setting JSON to false
	I0929 12:34:10.508302  604468 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8196,"bootTime":1759141055,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:34:10.508430  604468 start.go:140] virtualization: kvm guest
	I0929 12:34:10.510224  604468 out.go:179] * [functional-253578] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 12:34:10.511690  604468 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:34:10.511729  604468 notify.go:220] Checking for updates...
	I0929 12:34:10.513803  604468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:34:10.515115  604468 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 12:34:10.516350  604468 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 12:34:10.517537  604468 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:34:10.518723  604468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:34:10.520741  604468 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:34:10.521535  604468 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:34:10.551571  604468 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:34:10.551678  604468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:34:10.624855  604468 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-09-29 12:34:10.610226093 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:34:10.625023  604468 docker.go:318] overlay module found
	I0929 12:34:10.627537  604468 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 12:34:10.628653  604468 start.go:304] selected driver: docker
	I0929 12:34:10.628678  604468 start.go:924] validating driver "docker" against &{Name:functional-253578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-253578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:34:10.628801  604468 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:34:10.630750  604468 out.go:203] 
	W0929 12:34:10.632356  604468 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 12:34:10.633860  604468 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh -n functional-253578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cp functional-253578:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3992790239/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh -n functional-253578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh -n functional-253578 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
TestFunctional/parallel/MySQL (19.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-253578 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pqwk8" [20766298-53e8-4c1f-b421-5e2ef8c5ff5f] Pending
helpers_test.go:352: "mysql-5bb876957f-pqwk8" [20766298-53e8-4c1f-b421-5e2ef8c5ff5f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pqwk8" [20766298-53e8-4c1f-b421-5e2ef8c5ff5f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.004757974s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-253578 exec mysql-5bb876957f-pqwk8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-253578 exec mysql-5bb876957f-pqwk8 -- mysql -ppassword -e "show databases;": exit status 1 (143.165337ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 12:34:24.572676  567516 retry.go:31] will retry after 1.409258116s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-253578 exec mysql-5bb876957f-pqwk8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-253578 exec mysql-5bb876957f-pqwk8 -- mysql -ppassword -e "show databases;": exit status 1 (147.479069ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 12:34:26.130479  567516 retry.go:31] will retry after 2.14723846s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-253578 exec mysql-5bb876957f-pqwk8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.21s)
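
The first two `show databases;` attempts above fail with ERROR 2002 because mysqld inside the pod is still starting; the harness retries with a growing delay (retry.go:31) until the query succeeds. A stand-alone sketch of that retry-with-backoff pattern; the delays, attempt count, and error text are illustrative, not the exact jittered values from the log:
-- go sketch --
// Re-run a flaky operation with exponential backoff until it succeeds or the
// attempt budget is exhausted.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ERROR 2002 (HY000): mysqld not accepting connections yet")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "attempts")
}
-- /go sketch --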

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/567516/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /etc/test/nested/copy/567516/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/567516.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /etc/ssl/certs/567516.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/567516.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /usr/share/ca-certificates/567516.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5675162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /etc/ssl/certs/5675162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5675162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /usr/share/ca-certificates/5675162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-253578 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
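
The NodeLabels check above passes kubectl a go-template that prints only the label keys of the first node. kubectl's go-template output is based on Go's text/template, so the same expression can be exercised stand-alone; the node data below is made up for illustration:
-- go sketch --
// Evaluate the exact template from the log against a minimal fake node list.
package main

import (
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{
				"labels": map[string]string{
					"kubernetes.io/hostname": "functional-253578",
					"kubernetes.io/os":       "linux",
				},
			}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints the label keys separated by spaces
}
-- /go sketch --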

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "sudo systemctl is-active docker": exit status 1 (321.478077ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "sudo systemctl is-active containerd": exit status 1 (312.806331ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-253578 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-253578
localhost/kicbase/echo-server:functional-253578
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-253578 image ls --format short --alsologtostderr:
I0929 12:40:18.847023  612209 out.go:360] Setting OutFile to fd 1 ...
I0929 12:40:18.847297  612209 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:18.847306  612209 out.go:374] Setting ErrFile to fd 2...
I0929 12:40:18.847310  612209 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:18.847499  612209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
I0929 12:40:18.848101  612209 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:18.848199  612209 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:18.848560  612209 cli_runner.go:164] Run: docker container inspect functional-253578 --format={{.State.Status}}
I0929 12:40:18.867990  612209 ssh_runner.go:195] Run: systemctl --version
I0929 12:40:18.868047  612209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253578
I0929 12:40:18.886809  612209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/functional-253578/id_rsa Username:docker}
I0929 12:40:18.981290  612209 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-253578 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-253578  │ d41bc7a1967bf │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-253578  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-253578  │ 4480ba3aaf090 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-253578 image ls --format table --alsologtostderr:
I0929 12:40:22.689903  612779 out.go:360] Setting OutFile to fd 1 ...
I0929 12:40:22.690279  612779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:22.690294  612779 out.go:374] Setting ErrFile to fd 2...
I0929 12:40:22.690298  612779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:22.690543  612779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
I0929 12:40:22.691260  612779 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:22.691365  612779 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:22.691831  612779 cli_runner.go:164] Run: docker container inspect functional-253578 --format={{.State.Status}}
I0929 12:40:22.712508  612779 ssh_runner.go:195] Run: systemctl --version
I0929 12:40:22.712569  612779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253578
I0929 12:40:22.732077  612779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/functional-253578/id_rsa Username:docker}
I0929 12:40:22.826624  612779 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-253578 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["regist
ry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"b079cc0313eefd7495068b35ffcf6a06cc75e08dcace4cf09bf40b4f1a927512","repoDigests":["docker.io/library/854b8501bfb92d3d2f5761e0a3b12514cec33c3095354452b6eadf1a55ec7aad-tmp@sha256:df81a0f0a867d126eaf9cce52de2570e57cb73369944d4fda6dbc99f038ba0c0"],"repoTags":[],"size":"1465612"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],
"size":"519571821"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66
822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"4480ba3aaf090ab6b1bfb0d3532aa67c9ee5c0e002221cc6060363d31584533e","repoDigests":["localhost/minikube-local-cache-test@sha256:9c9fee08ab6159944bf88c55805231b37c4a65d1c91430e8b411676a962f48a3"],"repoTags":["localhost/minikube-local-c
ache-test:functional-253578"],"size":"3330"},{"id":"d41bc7a1967bf8b33c2a60bb411f6370f67b0ba88f65c5d3a4e0a6198cebc16e","repoDigests":["localhost/my-image@sha256:71d167ff66b81b6948b720ff6b68615a13cabb2c27dcc18b0db9ca201d56e555"],"repoTags":["localhost/my-image:functional-253578"],"size":"1468194"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-cont
roller-manager:v1.34.0"],"size":"76004183"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kic
base/echo-server:functional-253578"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-253578 image ls --format json --alsologtostderr:
I0929 12:40:22.456351  612730 out.go:360] Setting OutFile to fd 1 ...
I0929 12:40:22.456479  612730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:22.456487  612730 out.go:374] Setting ErrFile to fd 2...
I0929 12:40:22.456492  612730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:22.456712  612730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
I0929 12:40:22.457434  612730 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:22.457528  612730 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:22.457943  612730 cli_runner.go:164] Run: docker container inspect functional-253578 --format={{.State.Status}}
I0929 12:40:22.477939  612730 ssh_runner.go:195] Run: systemctl --version
I0929 12:40:22.477999  612730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253578
I0929 12:40:22.499170  612730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/functional-253578/id_rsa Username:docker}
I0929 12:40:22.594497  612730 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
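
ImageListJson emits a single JSON array whose entries carry id, repoDigests, repoTags, and size (the size is a string of bytes). A small sketch that decodes a fragment of that output; the struct is defined here for illustration and is an assumption, not minikube's own type:
-- go sketch --
// Unmarshal one entry of the "image ls --format json" output shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw := `[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30",
"repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],
"repoTags":["localhost/kicbase/echo-server:functional-253578"],"size":"4943877"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Printf("%v -> %s bytes\n", img.RepoTags, img.Size)
	}
}
-- /go sketch --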

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-253578 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-253578
size: "4943877"
- id: 4480ba3aaf090ab6b1bfb0d3532aa67c9ee5c0e002221cc6060363d31584533e
repoDigests:
- localhost/minikube-local-cache-test@sha256:9c9fee08ab6159944bf88c55805231b37c4a65d1c91430e8b411676a962f48a3
repoTags:
- localhost/minikube-local-cache-test:functional-253578
size: "3330"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-253578 image ls --format yaml --alsologtostderr:
I0929 12:40:19.072219  612260 out.go:360] Setting OutFile to fd 1 ...
I0929 12:40:19.072501  612260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:19.072511  612260 out.go:374] Setting ErrFile to fd 2...
I0929 12:40:19.072516  612260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:19.072765  612260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
I0929 12:40:19.073514  612260 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:19.073623  612260 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:19.074071  612260 cli_runner.go:164] Run: docker container inspect functional-253578 --format={{.State.Status}}
I0929 12:40:19.093585  612260 ssh_runner.go:195] Run: systemctl --version
I0929 12:40:19.093640  612260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253578
I0929 12:40:19.113197  612260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/functional-253578/id_rsa Username:docker}
I0929 12:40:19.208838  612260 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh pgrep buildkitd: exit status 1 (271.25649ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr: (2.641444129s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b079cc0313e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-253578
--> d41bc7a1967
Successfully tagged localhost/my-image:functional-253578
d41bc7a1967bf8b33c2a60bb411f6370f67b0ba88f65c5d3a4e0a6198cebc16e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-253578 image build -t localhost/my-image:functional-253578 testdata/build --alsologtostderr:
I0929 12:40:19.578133  612410 out.go:360] Setting OutFile to fd 1 ...
I0929 12:40:19.578303  612410 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:19.578318  612410 out.go:374] Setting ErrFile to fd 2...
I0929 12:40:19.578325  612410 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:40:19.578572  612410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
I0929 12:40:19.579287  612410 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:19.580136  612410 config.go:182] Loaded profile config "functional-253578": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 12:40:19.580560  612410 cli_runner.go:164] Run: docker container inspect functional-253578 --format={{.State.Status}}
I0929 12:40:19.600770  612410 ssh_runner.go:195] Run: systemctl --version
I0929 12:40:19.600827  612410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253578
I0929 12:40:19.620984  612410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/functional-253578/id_rsa Username:docker}
I0929 12:40:19.716438  612410 build_images.go:161] Building image from path: /tmp/build.2652663410.tar
I0929 12:40:19.716517  612410 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 12:40:19.727115  612410 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2652663410.tar
I0929 12:40:19.731421  612410 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2652663410.tar: stat -c "%s %y" /var/lib/minikube/build/build.2652663410.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2652663410.tar': No such file or directory
I0929 12:40:19.731455  612410 ssh_runner.go:362] scp /tmp/build.2652663410.tar --> /var/lib/minikube/build/build.2652663410.tar (3072 bytes)
I0929 12:40:19.761529  612410 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2652663410
I0929 12:40:19.773225  612410 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2652663410 -xf /var/lib/minikube/build/build.2652663410.tar
I0929 12:40:19.784866  612410 crio.go:315] Building image: /var/lib/minikube/build/build.2652663410
I0929 12:40:19.784987  612410 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-253578 /var/lib/minikube/build/build.2652663410 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 12:40:22.142838  612410 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-253578 /var/lib/minikube/build/build.2652663410 --cgroup-manager=cgroupfs: (2.357794861s)
I0929 12:40:22.142987  612410 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2652663410
I0929 12:40:22.153694  612410 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2652663410.tar
I0929 12:40:22.164007  612410 build_images.go:217] Built localhost/my-image:functional-253578 from /tmp/build.2652663410.tar
I0929 12:40:22.164058  612410 build_images.go:133] succeeded building to: functional-253578
I0929 12:40:22.164067  612410 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.566511464s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-253578
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "402.591567ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.413369ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "374.478253ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.349402ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image load --daemon kicbase/echo-server:functional-253578 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 image load --daemon kicbase/echo-server:functional-253578 --alsologtostderr: (1.280827346s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdany-port3854837290/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759149250642487661" to /tmp/TestFunctionalparallelMountCmdany-port3854837290/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759149250642487661" to /tmp/TestFunctionalparallelMountCmdany-port3854837290/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759149250642487661" to /tmp/TestFunctionalparallelMountCmdany-port3854837290/001/test-1759149250642487661
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.040692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:34:10.962936  567516 retry.go:31] will retry after 308.994803ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 12:34 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 12:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 12:34 test-1759149250642487661
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh cat /mount-9p/test-1759149250642487661
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-253578 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b87615d2-18bf-4d02-8ebe-ccb48a592f31] Pending
helpers_test.go:352: "busybox-mount" [b87615d2-18bf-4d02-8ebe-ccb48a592f31] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b87615d2-18bf-4d02-8ebe-ccb48a592f31] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b87615d2-18bf-4d02-8ebe-ccb48a592f31] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.00397914s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-253578 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdany-port3854837290/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image load --daemon kicbase/echo-server:functional-253578 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-253578
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image load --daemon kicbase/echo-server:functional-253578 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 image load --daemon kicbase/echo-server:functional-253578 --alsologtostderr: (2.198894868s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image save kicbase/echo-server:functional-253578 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 image save kicbase/echo-server:functional-253578 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.780282445s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image rm kicbase/echo-server:functional-253578 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-253578
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 image save --daemon kicbase/echo-server:functional-253578 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-253578
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 607378: os: process already finished
helpers_test.go:519: unable to terminate pid 607202: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdspecific-port4019347123/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.748243ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:34:24.893671  567516 retry.go:31] will retry after 560.728737ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdspecific-port4019347123/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "sudo umount -f /mount-9p": exit status 1 (317.352247ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-253578 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdspecific-port4019347123/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T" /mount1: exit status 1 (365.737683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:34:26.951638  567516 retry.go:31] will retry after 658.885885ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-253578 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-253578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup535977322/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-253578 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 service list: (1.709868558s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-253578 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-253578 service list -o json: (1.723338579s)
functional_test.go:1504: Took "1.723463351s" to run "out/minikube-linux-amd64 -p functional-253578 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-253578
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-253578
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-253578
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (118.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m57.969937541s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (118.72s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 kubectl -- rollout status deployment/busybox: (4.509303389s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-2z4ts -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-hvc5q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-kc2tp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-2z4ts -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-hvc5q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-kc2tp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-2z4ts -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-hvc5q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-kc2tp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.84s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-2z4ts -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-2z4ts -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-hvc5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-hvc5q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-kc2tp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 kubectl -- exec busybox-7b57f96db7-kc2tp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 node add --alsologtostderr -v 5: (23.944874397s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.85s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-649703 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp testdata/cp-test.txt ha-649703:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2625135160/001/cp-test_ha-649703.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703:/home/docker/cp-test.txt ha-649703-m02:/home/docker/cp-test_ha-649703_ha-649703-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test_ha-649703_ha-649703-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703:/home/docker/cp-test.txt ha-649703-m03:/home/docker/cp-test_ha-649703_ha-649703-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test_ha-649703_ha-649703-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703:/home/docker/cp-test.txt ha-649703-m04:/home/docker/cp-test_ha-649703_ha-649703-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test_ha-649703_ha-649703-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp testdata/cp-test.txt ha-649703-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2625135160/001/cp-test_ha-649703-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m02:/home/docker/cp-test.txt ha-649703:/home/docker/cp-test_ha-649703-m02_ha-649703.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test_ha-649703-m02_ha-649703.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m02:/home/docker/cp-test.txt ha-649703-m03:/home/docker/cp-test_ha-649703-m02_ha-649703-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test_ha-649703-m02_ha-649703-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m02:/home/docker/cp-test.txt ha-649703-m04:/home/docker/cp-test_ha-649703-m02_ha-649703-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test_ha-649703-m02_ha-649703-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp testdata/cp-test.txt ha-649703-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2625135160/001/cp-test_ha-649703-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m03:/home/docker/cp-test.txt ha-649703:/home/docker/cp-test_ha-649703-m03_ha-649703.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test_ha-649703-m03_ha-649703.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m03:/home/docker/cp-test.txt ha-649703-m02:/home/docker/cp-test_ha-649703-m03_ha-649703-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test_ha-649703-m03_ha-649703-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m03:/home/docker/cp-test.txt ha-649703-m04:/home/docker/cp-test_ha-649703-m03_ha-649703-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test_ha-649703-m03_ha-649703-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp testdata/cp-test.txt ha-649703-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2625135160/001/cp-test_ha-649703-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m04:/home/docker/cp-test.txt ha-649703:/home/docker/cp-test_ha-649703-m04_ha-649703.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703 "sudo cat /home/docker/cp-test_ha-649703-m04_ha-649703.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m04:/home/docker/cp-test.txt ha-649703-m02:/home/docker/cp-test_ha-649703-m04_ha-649703-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m02 "sudo cat /home/docker/cp-test_ha-649703-m04_ha-649703-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 cp ha-649703-m04:/home/docker/cp-test.txt ha-649703-m03:/home/docker/cp-test_ha-649703-m04_ha-649703-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 ssh -n ha-649703-m03 "sudo cat /home/docker/cp-test_ha-649703-m04_ha-649703-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.39s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 node stop m02 --alsologtostderr -v 5: (12.574710271s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5: exit status 7 (713.443612ms)

                                                
                                                
-- stdout --
	ha-649703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-649703-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649703-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-649703-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:47:43.055401  637349 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:47:43.055519  637349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:47:43.055528  637349 out.go:374] Setting ErrFile to fd 2...
	I0929 12:47:43.055532  637349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:47:43.055792  637349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:47:43.056001  637349 out.go:368] Setting JSON to false
	I0929 12:47:43.056041  637349 mustload.go:65] Loading cluster: ha-649703
	I0929 12:47:43.056078  637349 notify.go:220] Checking for updates...
	I0929 12:47:43.056557  637349 config.go:182] Loaded profile config "ha-649703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:47:43.056579  637349 status.go:174] checking status of ha-649703 ...
	I0929 12:47:43.057162  637349 cli_runner.go:164] Run: docker container inspect ha-649703 --format={{.State.Status}}
	I0929 12:47:43.078191  637349 status.go:371] ha-649703 host status = "Running" (err=<nil>)
	I0929 12:47:43.078237  637349 host.go:66] Checking if "ha-649703" exists ...
	I0929 12:47:43.078634  637349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649703
	I0929 12:47:43.098442  637349 host.go:66] Checking if "ha-649703" exists ...
	I0929 12:47:43.098790  637349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:47:43.098861  637349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649703
	I0929 12:47:43.119991  637349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/ha-649703/id_rsa Username:docker}
	I0929 12:47:43.214897  637349 ssh_runner.go:195] Run: systemctl --version
	I0929 12:47:43.220060  637349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:47:43.234256  637349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:47:43.292603  637349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:47:43.28144961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:47:43.293245  637349 kubeconfig.go:125] found "ha-649703" server: "https://192.168.49.254:8443"
	I0929 12:47:43.293287  637349 api_server.go:166] Checking apiserver status ...
	I0929 12:47:43.293339  637349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:47:43.306581  637349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	W0929 12:47:43.318189  637349 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:47:43.318251  637349 ssh_runner.go:195] Run: ls
	I0929 12:47:43.323001  637349 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 12:47:43.327583  637349 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 12:47:43.327613  637349 status.go:463] ha-649703 apiserver status = Running (err=<nil>)
	I0929 12:47:43.327625  637349 status.go:176] ha-649703 status: &{Name:ha-649703 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:47:43.327642  637349 status.go:174] checking status of ha-649703-m02 ...
	I0929 12:47:43.327897  637349 cli_runner.go:164] Run: docker container inspect ha-649703-m02 --format={{.State.Status}}
	I0929 12:47:43.347415  637349 status.go:371] ha-649703-m02 host status = "Stopped" (err=<nil>)
	I0929 12:47:43.347440  637349 status.go:384] host is not running, skipping remaining checks
	I0929 12:47:43.347447  637349 status.go:176] ha-649703-m02 status: &{Name:ha-649703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:47:43.347468  637349 status.go:174] checking status of ha-649703-m03 ...
	I0929 12:47:43.347732  637349 cli_runner.go:164] Run: docker container inspect ha-649703-m03 --format={{.State.Status}}
	I0929 12:47:43.366497  637349 status.go:371] ha-649703-m03 host status = "Running" (err=<nil>)
	I0929 12:47:43.366525  637349 host.go:66] Checking if "ha-649703-m03" exists ...
	I0929 12:47:43.366800  637349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649703-m03
	I0929 12:47:43.386303  637349 host.go:66] Checking if "ha-649703-m03" exists ...
	I0929 12:47:43.386597  637349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:47:43.386652  637349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649703-m03
	I0929 12:47:43.406483  637349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/ha-649703-m03/id_rsa Username:docker}
	I0929 12:47:43.501941  637349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:47:43.515549  637349 kubeconfig.go:125] found "ha-649703" server: "https://192.168.49.254:8443"
	I0929 12:47:43.515586  637349 api_server.go:166] Checking apiserver status ...
	I0929 12:47:43.515630  637349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:47:43.527828  637349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0929 12:47:43.539101  637349 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:47:43.539183  637349 ssh_runner.go:195] Run: ls
	I0929 12:47:43.543270  637349 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 12:47:43.547827  637349 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 12:47:43.547862  637349 status.go:463] ha-649703-m03 apiserver status = Running (err=<nil>)
	I0929 12:47:43.547876  637349 status.go:176] ha-649703-m03 status: &{Name:ha-649703-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:47:43.547912  637349 status.go:174] checking status of ha-649703-m04 ...
	I0929 12:47:43.548203  637349 cli_runner.go:164] Run: docker container inspect ha-649703-m04 --format={{.State.Status}}
	I0929 12:47:43.567459  637349 status.go:371] ha-649703-m04 host status = "Running" (err=<nil>)
	I0929 12:47:43.567486  637349 host.go:66] Checking if "ha-649703-m04" exists ...
	I0929 12:47:43.567747  637349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649703-m04
	I0929 12:47:43.586448  637349 host.go:66] Checking if "ha-649703-m04" exists ...
	I0929 12:47:43.586736  637349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:47:43.586783  637349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649703-m04
	I0929 12:47:43.605697  637349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/ha-649703-m04/id_rsa Username:docker}
	I0929 12:47:43.700746  637349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:47:43.713768  637349 status.go:176] ha-649703-m04 status: &{Name:ha-649703-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.66s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 node start m02 --alsologtostderr -v 5: (8.656163686s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.66s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.18s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 stop --alsologtostderr -v 5
E0929 12:48:15.387353  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 stop --alsologtostderr -v 5: (48.67996377s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 start --wait true --alsologtostderr -v 5
E0929 12:49:09.425165  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.431815  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.443266  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.465766  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.507200  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.588703  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:09.751013  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:10.072855  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:10.714604  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:11.996738  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:14.559568  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:19.681431  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:29.922779  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:38.450195  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 start --wait true --alsologtostderr -v 5: (1m3.379862888s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.18s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node delete m03 --alsologtostderr -v 5
E0929 12:49:50.405171  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 node delete m03 --alsologtostderr -v 5: (10.671942297s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (42.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 stop --alsologtostderr -v 5
E0929 12:50:31.366719  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 stop --alsologtostderr -v 5: (41.925457157s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5: exit status 7 (119.613144ms)
-- stdout --
	ha-649703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649703-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649703-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0929 12:50:41.440176  653718 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:50:41.440589  653718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:50:41.440601  653718 out.go:374] Setting ErrFile to fd 2...
	I0929 12:50:41.440605  653718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:50:41.440815  653718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:50:41.441043  653718 out.go:368] Setting JSON to false
	I0929 12:50:41.441082  653718 mustload.go:65] Loading cluster: ha-649703
	I0929 12:50:41.441253  653718 notify.go:220] Checking for updates...
	I0929 12:50:41.441521  653718 config.go:182] Loaded profile config "ha-649703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:50:41.441544  653718 status.go:174] checking status of ha-649703 ...
	I0929 12:50:41.442117  653718 cli_runner.go:164] Run: docker container inspect ha-649703 --format={{.State.Status}}
	I0929 12:50:41.461590  653718 status.go:371] ha-649703 host status = "Stopped" (err=<nil>)
	I0929 12:50:41.461636  653718 status.go:384] host is not running, skipping remaining checks
	I0929 12:50:41.461653  653718 status.go:176] ha-649703 status: &{Name:ha-649703 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:50:41.461685  653718 status.go:174] checking status of ha-649703-m02 ...
	I0929 12:50:41.462029  653718 cli_runner.go:164] Run: docker container inspect ha-649703-m02 --format={{.State.Status}}
	I0929 12:50:41.481769  653718 status.go:371] ha-649703-m02 host status = "Stopped" (err=<nil>)
	I0929 12:50:41.481796  653718 status.go:384] host is not running, skipping remaining checks
	I0929 12:50:41.481803  653718 status.go:176] ha-649703-m02 status: &{Name:ha-649703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:50:41.481826  653718 status.go:174] checking status of ha-649703-m04 ...
	I0929 12:50:41.482120  653718 cli_runner.go:164] Run: docker container inspect ha-649703-m04 --format={{.State.Status}}
	I0929 12:50:41.502762  653718 status.go:371] ha-649703-m04 host status = "Stopped" (err=<nil>)
	I0929 12:50:41.502791  653718 status.go:384] host is not running, skipping remaining checks
	I0929 12:50:41.502798  653718 status.go:176] ha-649703-m04 status: &{Name:ha-649703-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (59.97s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (59.134182708s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.97s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (67.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 node add --control-plane --alsologtostderr -v 5
E0929 12:51:53.288864  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-649703 node add --control-plane --alsologtostderr -v 5: (1m6.590862856s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-649703 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (40.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-244987 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 12:53:15.385602  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-244987 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.849818035s)
--- PASS: TestJSONOutput/start/Command (40.85s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-244987 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-244987 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-244987 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-244987 --output=json --user=testUser: (7.948064164s)
--- PASS: TestJSONOutput/stop/Command (7.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-931280 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-931280 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.340703ms)
-- stdout --
	{"specversion":"1.0","id":"7c65798c-5e1c-4cf1-9680-80f74bca18a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-931280] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c7b005f-ac6a-4691-806d-025c3efcfc83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"cab61c3c-2fda-470a-886f-51f36e204496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2d43f538-3a0b-459e-8840-2892e27a2eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig"}}
	{"specversion":"1.0","id":"25bdbf7e-3fcf-4c84-aab1-ff21faccdafe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube"}}
	{"specversion":"1.0","id":"428739ef-a67a-4758-a630-d1b35ad2a804","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"305f5ac9-0a9e-43f7-a35d-3d0364d4fee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8b4f4769-361a-4ee2-ad16-81dd5120b1e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-931280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-931280
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.68s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-264756 --network=
E0929 12:54:09.427590  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-264756 --network=: (27.490785202s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-264756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-264756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-264756: (2.165330605s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.68s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-905438 --network=bridge
E0929 12:54:37.131152  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-905438 --network=bridge: (21.426883886s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-905438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-905438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-905438: (1.983018001s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.43s)

                                                
                                    
TestKicExistingNetwork (24.88s)

=== RUN   TestKicExistingNetwork
I0929 12:54:47.057009  567516 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 12:54:47.078128  567516 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 12:54:47.078233  567516 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 12:54:47.078264  567516 cli_runner.go:164] Run: docker network inspect existing-network
W0929 12:54:47.097654  567516 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 12:54:47.097692  567516 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0929 12:54:47.097723  567516 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0929 12:54:47.097857  567516 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 12:54:47.117016  567516 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-658937e2822f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:db:59:32:33:14} reservation:<nil>}
I0929 12:54:47.117439  567516 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004a72c0}
I0929 12:54:47.117474  567516 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 12:54:47.117534  567516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 12:54:47.178728  567516 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-754158 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-754158 --network=existing-network: (22.719207582s)
helpers_test.go:175: Cleaning up "existing-network-754158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-754158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-754158: (2.005683945s)
I0929 12:55:11.923494  567516 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.88s)

                                                
                                    
TestKicCustomSubnet (26.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-707544 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-707544 --subnet=192.168.60.0/24: (24.460907236s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-707544 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-707544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-707544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-707544: (2.190117384s)
--- PASS: TestKicCustomSubnet (26.67s)

                                                
                                    
TestKicStaticIP (26.4s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-537544 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-537544 --static-ip=192.168.200.200: (24.06019789s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-537544 ip
helpers_test.go:175: Cleaning up "static-ip-537544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-537544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-537544: (2.195342437s)
--- PASS: TestKicStaticIP (26.40s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (50.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-839389 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-839389 --driver=docker  --container-runtime=crio: (22.105735621s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-855901 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-855901 --driver=docker  --container-runtime=crio: (22.229936788s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-839389
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-855901
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-855901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-855901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-855901: (2.393055869s)
helpers_test.go:175: Cleaning up "first-839389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-839389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-839389: (2.431412456s)
--- PASS: TestMinikubeProfile (50.44s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-311949 --memory=3072 --mount-string /tmp/TestMountStartserial384018430/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-311949 --memory=3072 --mount-string /tmp/TestMountStartserial384018430/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.572943712s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-311949 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.77s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-328354 --memory=3072 --mount-string /tmp/TestMountStartserial384018430/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-328354 --memory=3072 --mount-string /tmp/TestMountStartserial384018430/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.765448706s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.77s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-328354 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-311949 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-311949 --alsologtostderr -v=5: (1.695496841s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-328354 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-328354
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-328354: (1.208898712s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-328354
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-328354: (6.816473586s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-328354 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-144597 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0929 12:58:15.385426  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-144597 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.7485084s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.25s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-144597 -- rollout status deployment/busybox: (4.959132258s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-8kvln -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-hgndb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-8kvln -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-hgndb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-8kvln -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-hgndb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.54s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-8kvln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-8kvln -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-hgndb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-144597 -- exec busybox-7b57f96db7-hgndb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (24.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-144597 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-144597 -v=5 --alsologtostderr: (23.78871669s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.44s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-144597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp testdata/cp-test.txt multinode-144597:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1263704100/001/cp-test_multinode-144597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597:/home/docker/cp-test.txt multinode-144597-m02:/home/docker/cp-test_multinode-144597_multinode-144597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test_multinode-144597_multinode-144597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597:/home/docker/cp-test.txt multinode-144597-m03:/home/docker/cp-test_multinode-144597_multinode-144597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test_multinode-144597_multinode-144597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp testdata/cp-test.txt multinode-144597-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1263704100/001/cp-test_multinode-144597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m02:/home/docker/cp-test.txt multinode-144597:/home/docker/cp-test_multinode-144597-m02_multinode-144597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test_multinode-144597-m02_multinode-144597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m02:/home/docker/cp-test.txt multinode-144597-m03:/home/docker/cp-test_multinode-144597-m02_multinode-144597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test_multinode-144597-m02_multinode-144597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp testdata/cp-test.txt multinode-144597-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1263704100/001/cp-test_multinode-144597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m03:/home/docker/cp-test.txt multinode-144597:/home/docker/cp-test_multinode-144597-m03_multinode-144597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597 "sudo cat /home/docker/cp-test_multinode-144597-m03_multinode-144597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 cp multinode-144597-m03:/home/docker/cp-test.txt multinode-144597-m02:/home/docker/cp-test_multinode-144597-m03_multinode-144597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 ssh -n multinode-144597-m02 "sudo cat /home/docker/cp-test_multinode-144597-m03_multinode-144597-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 node stop m03
E0929 12:59:09.425187  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-144597 node stop m03: (1.33573865s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-144597 status: exit status 7 (517.721977ms)

                                                
                                                
-- stdout --
	multinode-144597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144597-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144597-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr: exit status 7 (516.148765ms)

                                                
                                                
-- stdout --
	multinode-144597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144597-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144597-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:59:10.445834  716151 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:59:10.446110  716151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:59:10.446118  716151 out.go:374] Setting ErrFile to fd 2...
	I0929 12:59:10.446123  716151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:59:10.446348  716151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 12:59:10.446522  716151 out.go:368] Setting JSON to false
	I0929 12:59:10.446555  716151 mustload.go:65] Loading cluster: multinode-144597
	I0929 12:59:10.446696  716151 notify.go:220] Checking for updates...
	I0929 12:59:10.446980  716151 config.go:182] Loaded profile config "multinode-144597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:59:10.447003  716151 status.go:174] checking status of multinode-144597 ...
	I0929 12:59:10.447426  716151 cli_runner.go:164] Run: docker container inspect multinode-144597 --format={{.State.Status}}
	I0929 12:59:10.467139  716151 status.go:371] multinode-144597 host status = "Running" (err=<nil>)
	I0929 12:59:10.467171  716151 host.go:66] Checking if "multinode-144597" exists ...
	I0929 12:59:10.467445  716151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-144597
	I0929 12:59:10.488086  716151 host.go:66] Checking if "multinode-144597" exists ...
	I0929 12:59:10.488379  716151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:59:10.488418  716151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-144597
	I0929 12:59:10.508907  716151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/multinode-144597/id_rsa Username:docker}
	I0929 12:59:10.603785  716151 ssh_runner.go:195] Run: systemctl --version
	I0929 12:59:10.608809  716151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:59:10.622268  716151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:59:10.680058  716151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 12:59:10.669496943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:59:10.680722  716151 kubeconfig.go:125] found "multinode-144597" server: "https://192.168.67.2:8443"
	I0929 12:59:10.680768  716151 api_server.go:166] Checking apiserver status ...
	I0929 12:59:10.680813  716151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:59:10.694192  716151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup
	W0929 12:59:10.705724  716151 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:59:10.705788  716151 ssh_runner.go:195] Run: ls
	I0929 12:59:10.710198  716151 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 12:59:10.716598  716151 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 12:59:10.716627  716151 status.go:463] multinode-144597 apiserver status = Running (err=<nil>)
	I0929 12:59:10.716639  716151 status.go:176] multinode-144597 status: &{Name:multinode-144597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:59:10.716658  716151 status.go:174] checking status of multinode-144597-m02 ...
	I0929 12:59:10.716945  716151 cli_runner.go:164] Run: docker container inspect multinode-144597-m02 --format={{.State.Status}}
	I0929 12:59:10.736428  716151 status.go:371] multinode-144597-m02 host status = "Running" (err=<nil>)
	I0929 12:59:10.736474  716151 host.go:66] Checking if "multinode-144597-m02" exists ...
	I0929 12:59:10.736768  716151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-144597-m02
	I0929 12:59:10.756706  716151 host.go:66] Checking if "multinode-144597-m02" exists ...
	I0929 12:59:10.757145  716151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:59:10.757216  716151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-144597-m02
	I0929 12:59:10.775966  716151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21652-564029/.minikube/machines/multinode-144597-m02/id_rsa Username:docker}
	I0929 12:59:10.871857  716151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:59:10.885942  716151 status.go:176] multinode-144597-m02 status: &{Name:multinode-144597-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:59:10.885980  716151 status.go:174] checking status of multinode-144597-m03 ...
	I0929 12:59:10.886354  716151 cli_runner.go:164] Run: docker container inspect multinode-144597-m03 --format={{.State.Status}}
	I0929 12:59:10.906876  716151 status.go:371] multinode-144597-m03 host status = "Stopped" (err=<nil>)
	I0929 12:59:10.906918  716151 status.go:384] host is not running, skipping remaining checks
	I0929 12:59:10.906928  716151 status.go:176] multinode-144597-m03 status: &{Name:multinode-144597-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-144597 node start m03 -v=5 --alsologtostderr: (6.808737046s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.54s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-144597
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-144597
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-144597: (29.634642971s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-144597 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-144597 --wait=true -v=5 --alsologtostderr: (52.337832952s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-144597
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-144597 node delete m03: (4.790295654s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.43s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-144597 stop: (28.644482375s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-144597 status: exit status 7 (99.440801ms)

                                                
                                                
-- stdout --
	multinode-144597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144597-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr: exit status 7 (95.948638ms)

                                                
                                                
-- stdout --
	multinode-144597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144597-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:01:14.762132  726399 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:01:14.762258  726399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:14.762263  726399 out.go:374] Setting ErrFile to fd 2...
	I0929 13:01:14.762267  726399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:14.762470  726399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:01:14.762631  726399 out.go:368] Setting JSON to false
	I0929 13:01:14.762668  726399 mustload.go:65] Loading cluster: multinode-144597
	I0929 13:01:14.762821  726399 notify.go:220] Checking for updates...
	I0929 13:01:14.763156  726399 config.go:182] Loaded profile config "multinode-144597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:01:14.763181  726399 status.go:174] checking status of multinode-144597 ...
	I0929 13:01:14.763722  726399 cli_runner.go:164] Run: docker container inspect multinode-144597 --format={{.State.Status}}
	I0929 13:01:14.784992  726399 status.go:371] multinode-144597 host status = "Stopped" (err=<nil>)
	I0929 13:01:14.785037  726399 status.go:384] host is not running, skipping remaining checks
	I0929 13:01:14.785048  726399 status.go:176] multinode-144597 status: &{Name:multinode-144597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:01:14.785108  726399 status.go:174] checking status of multinode-144597-m02 ...
	I0929 13:01:14.785387  726399 cli_runner.go:164] Run: docker container inspect multinode-144597-m02 --format={{.State.Status}}
	I0929 13:01:14.807130  726399 status.go:371] multinode-144597-m02 host status = "Stopped" (err=<nil>)
	I0929 13:01:14.807154  726399 status.go:384] host is not running, skipping remaining checks
	I0929 13:01:14.807162  726399 status.go:176] multinode-144597-m02 status: &{Name:multinode-144597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-144597 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-144597 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.20072807s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-144597 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.83s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-144597
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-144597-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-144597-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.12329ms)

                                                
                                                
-- stdout --
	* [multinode-144597-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-144597-m02' is duplicated with machine name 'multinode-144597-m02' in profile 'multinode-144597'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-144597-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-144597-m03 --driver=docker  --container-runtime=crio: (20.934904468s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-144597
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-144597: exit status 80 (300.917692ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-144597 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-144597-m03 already exists in multinode-144597-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-144597-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-144597-m03: (2.419613282s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.78s)

                                                
                                    
TestPreload (119.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-641842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0929 13:03:15.385709  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-641842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.781459702s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-641842 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-641842 image pull gcr.io/k8s-minikube/busybox: (3.732383758s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-641842
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-641842: (6.04940908s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-641842 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0929 13:04:09.425114  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-641842 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.352339983s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-641842 image list
helpers_test.go:175: Cleaning up "test-preload-641842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-641842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-641842: (2.475184191s)
--- PASS: TestPreload (119.63s)

                                                
                                    
TestScheduledStopUnix (97.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-378926 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-378926 --memory=3072 --driver=docker  --container-runtime=crio: (21.493453801s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-378926 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-378926 -n scheduled-stop-378926
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-378926 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 13:04:51.287927  567516 retry.go:31] will retry after 71.125µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.289121  567516 retry.go:31] will retry after 154.447µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.290290  567516 retry.go:31] will retry after 155.404µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.291435  567516 retry.go:31] will retry after 406.135µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.292595  567516 retry.go:31] will retry after 365.352µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.293763  567516 retry.go:31] will retry after 495.305µs: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.294931  567516 retry.go:31] will retry after 1.621002ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.297161  567516 retry.go:31] will retry after 2.268494ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.300376  567516 retry.go:31] will retry after 3.060783ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.303533  567516 retry.go:31] will retry after 3.013772ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.306790  567516 retry.go:31] will retry after 3.867566ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.311027  567516 retry.go:31] will retry after 11.347846ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.323296  567516 retry.go:31] will retry after 14.284752ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.338587  567516 retry.go:31] will retry after 22.670813ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.361953  567516 retry.go:31] will retry after 15.609166ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
I0929 13:04:51.378283  567516 retry.go:31] will retry after 37.610681ms: open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/scheduled-stop-378926/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-378926 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-378926 -n scheduled-stop-378926
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-378926
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-378926 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0929 13:05:32.492552  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-378926
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-378926: exit status 7 (76.393346ms)

                                                
                                                
-- stdout --
	scheduled-stop-378926
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-378926 -n scheduled-stop-378926
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-378926 -n scheduled-stop-378926: exit status 7 (74.155588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-378926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-378926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-378926: (4.765828349s)
--- PASS: TestScheduledStopUnix (97.72s)

                                                
                                    
TestInsufficientStorage (10.48s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-947696 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-947696 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.985462273s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bd25df2c-5b95-45df-9f20-de93f3af4454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-947696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd673db7-1e49-406a-8450-a7d6a33c5857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"f1e73a4c-f75a-4836-b378-acfe61f82b46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6849af71-9254-4f0f-b9d0-f554634514ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig"}}
	{"specversion":"1.0","id":"b73bd84c-b648-43fa-bc21-22ba00b058fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube"}}
	{"specversion":"1.0","id":"16005399-d945-4f31-b199-4bbc52e3a715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"50904f44-17c8-4a5d-acdc-caab055b6a8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8fb0d573-7d7c-4d77-bd46-859ce52245cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c59b2001-1579-4a62-8d2d-f97698feec82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3eb23df4-08c4-4547-b9cf-a268df59ff81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"455f7439-001c-412d-873d-9e7d3dfb9f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"90689d11-485b-4002-82dc-ac4c0b0d1cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-947696\" primary control-plane node in \"insufficient-storage-947696\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a5ba8da-804f-4c63-baee-6267560f7d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"35a00003-b10b-4d56-8c5f-a912667e80cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7af041a4-eabe-44e1-ae28-79acdda100a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-947696 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-947696 --output=json --layout=cluster: exit status 7 (288.554719ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-947696","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-947696","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:06:15.342489  748465 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-947696" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-947696 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-947696 --output=json --layout=cluster: exit status 7 (286.165392ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-947696","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-947696","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:06:15.629427  748570 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-947696" does not appear in /home/jenkins/minikube-integration/21652-564029/kubeconfig
	E0929 13:06:15.641132  748570 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/insufficient-storage-947696/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-947696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-947696
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-947696: (1.915853464s)
--- PASS: TestInsufficientStorage (10.48s)

                                                
                                    
TestRunningBinaryUpgrade (56.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3631089360 start -p running-upgrade-903009 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3631089360 start -p running-upgrade-903009 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.961720522s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-903009 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-903009 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.927555066s)
helpers_test.go:175: Cleaning up "running-upgrade-903009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-903009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-903009: (5.239840436s)
--- PASS: TestRunningBinaryUpgrade (56.66s)

                                                
                                    
TestKubernetesUpgrade (301.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.352372885s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-300182
E0929 13:08:15.385221  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-300182: (3.326598955s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-300182 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-300182 status --format={{.Host}}: exit status 7 (88.975429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.151298721s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-300182 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (78.421093ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-300182] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-300182
	    minikube start -p kubernetes-upgrade-300182 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3001822 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-300182 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300182 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.466883288s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-300182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-300182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-300182: (2.754505674s)
--- PASS: TestKubernetesUpgrade (301.29s)

                                                
                                    
TestMissingContainerUpgrade (114.03s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2843693983 start -p missing-upgrade-304001 --memory=3072 --driver=docker  --container-runtime=crio
E0929 13:06:18.452308  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2843693983 start -p missing-upgrade-304001 --memory=3072 --driver=docker  --container-runtime=crio: (55.554757358s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-304001
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-304001: (10.474985323s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-304001
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-304001 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-304001 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.054214592s)
helpers_test.go:175: Cleaning up "missing-upgrade-304001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-304001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-304001: (2.193889085s)
--- PASS: TestMissingContainerUpgrade (114.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (82.533032ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-260886] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260886 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260886 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.003258844s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-260886 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (72.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2899380659 start -p stopped-upgrade-281858 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2899380659 start -p stopped-upgrade-281858 --memory=3072 --vm-driver=docker  --container-runtime=crio: (55.531283074s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2899380659 -p stopped-upgrade-281858 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2899380659 -p stopped-upgrade-281858 stop: (1.930354215s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-281858 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-281858 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.64310526s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.694024792s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-260886 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-260886 status -o json: exit status 2 (316.931138ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-260886","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-260886
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-260886: (2.008950616s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.02s)
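
The status output above shows a profile whose container is up while the Kubernetes components are stopped ("Host":"Running", "Kubelet":"Stopped", "APIServer":"Stopped"), which is why status -o json exits 2 in this run. A minimal sketch for reading individual fields out of that JSON, assuming jq is available on the host (illustrative only):

	$ minikube -p NoKubernetes-260886 status -o json | jq -r .Host      # Running
	$ minikube -p NoKubernetes-260886 status -o json | jq -r .Kubelet   # Stopped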

                                                
                                    
TestNoKubernetes/serial/Start (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260886 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.362372451s)
--- PASS: TestNoKubernetes/serial/Start (5.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-260886 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-260886 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.327233ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
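
The check above treats the ssh exit code as the assertion: systemctl is-active returns 0 only when the unit is active, so the non-zero exit (status 3 reported through ssh above) is what proves kubelet is not running. A minimal stand-alone version of the same check, assuming the same profile (illustrative only):

	$ minikube ssh -p NoKubernetes-260886 "sudo systemctl is-active kubelet"   # prints "inactive" and exits non-zero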

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.59s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-281858
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-281858: (1.052534009s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-260886
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-260886: (1.216757922s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-260886 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-260886 --driver=docker  --container-runtime=crio: (7.945568068s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-260886 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-260886 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.263638ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestPause/serial/Start (48.81s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-602565 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-602565 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.805455803s)
--- PASS: TestPause/serial/Start (48.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-602565 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-602565 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.006029754s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.02s)

                                                
                                    
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-411536 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-411536 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.685753ms)

                                                
                                                
-- stdout --
	* [false-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:08:35.364016  788191 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:08:35.364403  788191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:08:35.364413  788191 out.go:374] Setting ErrFile to fd 2...
	I0929 13:08:35.364418  788191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:08:35.364633  788191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-564029/.minikube/bin
	I0929 13:08:35.365213  788191 out.go:368] Setting JSON to false
	I0929 13:08:35.366711  788191 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10260,"bootTime":1759141055,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:08:35.366827  788191 start.go:140] virtualization: kvm guest
	I0929 13:08:35.368916  788191 out.go:179] * [false-411536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:08:35.370165  788191 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:08:35.370169  788191 notify.go:220] Checking for updates...
	I0929 13:08:35.371477  788191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:08:35.372737  788191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-564029/kubeconfig
	I0929 13:08:35.374015  788191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-564029/.minikube
	I0929 13:08:35.375179  788191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:08:35.376378  788191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:08:35.378290  788191 config.go:182] Loaded profile config "force-systemd-env-189778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:08:35.378455  788191 config.go:182] Loaded profile config "kubernetes-upgrade-300182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:08:35.378678  788191 config.go:182] Loaded profile config "pause-602565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 13:08:35.378811  788191 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:08:35.411598  788191 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:08:35.411724  788191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:08:35.481522  788191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-09-29 13:08:35.468932135 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:08:35.481648  788191 docker.go:318] overlay module found
	I0929 13:08:35.483648  788191 out.go:179] * Using the docker driver based on user configuration
	I0929 13:08:35.485155  788191 start.go:304] selected driver: docker
	I0929 13:08:35.485180  788191 start.go:924] validating driver "docker" against <nil>
	I0929 13:08:35.485199  788191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:08:35.487268  788191 out.go:203] 
	W0929 13:08:35.488477  788191 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 13:08:35.489587  788191 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-411536 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-env-189778
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-300182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-602565
contexts:
- context:
    cluster: force-systemd-env-189778
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-env-189778
  name: force-systemd-env-189778
- context:
    cluster: kubernetes-upgrade-300182
    user: kubernetes-upgrade-300182
  name: kubernetes-upgrade-300182
- context:
    cluster: pause-602565
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-602565
  name: pause-602565
current-context: pause-602565
kind: Config
users:
- name: force-systemd-env-189778
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/force-systemd-env-189778/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/force-systemd-env-189778/client.key
- name: kubernetes-upgrade-300182
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.key
- name: pause-602565
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-411536

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411536"

                                                
                                                
----------------------- debugLogs end: false-411536 [took: 3.482864786s] --------------------------------
helpers_test.go:175: Cleaning up "false-411536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-411536
--- PASS: TestNetworkPlugins/group/false (3.85s)
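
The exit-14 result above is the expected guard for this network-plugin combination: with --container-runtime=crio, minikube rejects --cni=false because CRI-O relies on a CNI plugin for pod networking. A minimal sketch of a start command that passes the same validation, using the bridge CNI as an example choice (illustrative; any supported --cni value other than false should satisfy the check):

	$ minikube start -p false-411536 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio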

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-602565 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-602565 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-602565 --output=json --layout=cluster: exit status 2 (337.753182ms)

                                                
                                                
-- stdout --
	{"Name":"pause-602565","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-602565","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-602565 --alsologtostderr -v=5
I0929 13:08:40.628040  567516 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3433578413/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 13:08:40.643715  567516 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3433578413/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-602565 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (4.79s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-602565 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-602565 --alsologtostderr -v=5: (4.787534677s)
--- PASS: TestPause/serial/DeletePaused (4.79s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (19.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (19.261506651s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-602565
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-602565: exit status 1 (19.414028ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-602565: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (19.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (53.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.934645477s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (53.078595416s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-223488 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2c7cff0f-00d2-4b89-ad1f-f84529340473] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2c7cff0f-00d2-4b89-ad1f-f84529340473] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004028414s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-223488 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-223488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-223488 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-223488 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-223488 --alsologtostderr -v=3: (16.095456464s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-929827 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1b847593-10d1-4f57-8c13-c4c760bd8652] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1b847593-10d1-4f57-8c13-c4c760bd8652] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003232818s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-929827 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223488 -n old-k8s-version-223488: exit status 7 (73.217733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-223488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-223488 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.99921298s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-223488 -n old-k8s-version-223488
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-929827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-929827 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-929827 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-929827 --alsologtostderr -v=3: (16.272196176s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-929827 -n no-preload-929827: exit status 7 (72.666647ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-929827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (44.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-929827 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (44.637773568s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-929827 -n no-preload-929827
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (69.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m9.802946779s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 13:13:15.385524  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/addons-850167/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (42.187327055s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.19s)
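
This profile is started with --apiserver-port=8444 rather than the default 8443, so its kubeconfig cluster entry should point at port 8444. A minimal sketch for confirming the server URL of that cluster, assuming the kubeconfig cluster name matches the profile name as in the earlier kubectl config dump (illustrative only):

	$ kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-504443")].cluster.server}'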

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-144376 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f4056a65-1e98-45fc-bf84-18d66f63281f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f4056a65-1e98-45fc-bf84-18d66f63281f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00401394s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-144376 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-144376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-144376 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-144376 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-144376 --alsologtostderr -v=3: (18.213574714s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9ef45a2a-f23f-4bf3-b518-970f01762dca] Pending
helpers_test.go:352: "busybox" [9ef45a2a-f23f-4bf3-b518-970f01762dca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9ef45a2a-f23f-4bf3-b518-970f01762dca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004412527s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-504443 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-504443 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-504443 --alsologtostderr -v=3: (16.35999236s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-144376 -n embed-certs-144376: exit status 7 (76.874094ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-144376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (46.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-144376 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (46.077373623s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-144376 -n embed-certs-144376
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443: exit status 7 (87.946749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 13:14:09.426114  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/functional-253578/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504443 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (46.468924536s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-223488 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-223488 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488: exit status 2 (321.615909ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-223488 -n old-k8s-version-223488: exit status 2 (320.64299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-223488 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-223488 -n old-k8s-version-223488
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-223488 -n old-k8s-version-223488
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-597617 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-597617 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (28.424766261s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-929827 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.85s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-929827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827: exit status 2 (327.455652ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-929827 -n no-preload-929827: exit status 2 (326.01195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-929827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-929827 -n no-preload-929827
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-929827 -n no-preload-929827
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.85s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (40.68s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.681875046s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-597617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-597617 --alsologtostderr -v=3
E0929 13:30:06.624426  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.630867  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.642367  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.663854  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.705960  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.787481  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:06.948824  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:07.271037  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:07.912836  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-597617 --alsologtostderr -v=3: (2.400817798s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-597617 -n newest-cni-597617
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-597617 -n newest-cni-597617: exit status 7 (80.612068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-597617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-597617 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 13:30:09.194800  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:11.756984  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:16.878679  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-597617 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (11.506842118s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-597617 -n newest-cni-597617
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-597617 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-597617 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-597617 -n newest-cni-597617
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-597617 -n newest-cni-597617: exit status 2 (320.264193ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-597617 -n newest-cni-597617
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-597617 -n newest-cni-597617: exit status 2 (318.588524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-597617 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-597617 -n newest-cni-597617
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-597617 -n newest-cni-597617
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0929 13:30:27.120549  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/old-k8s-version-223488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.288115  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.294610  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.306082  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.327601  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.369028  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.450476  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.612065  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:28.933854  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:29.575624  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:30.857310  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:30:33.418694  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.138957223s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.14s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-411536 "pgrep -a kubelet"
I0929 13:30:35.432953  567516 config.go:182] Loaded profile config "auto-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gl4ps" [33ed9686-80ab-49df-9ec6-33c200dc8a6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 13:30:38.541079  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gl4ps" [33ed9686-80ab-49df-9ec6-33c200dc8a6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003727604s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-zbdtr" [9be584cb-8839-45a5-9083-744348856ca0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00404268s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-411536 "pgrep -a kubelet"
I0929 13:31:42.769582  567516 config.go:182] Loaded profile config "kindnet-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h5zdr" [3b84e5f7-c40c-41cf-b878-7dfbf9d24eef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h5zdr" [3b84e5f7-c40c-41cf-b878-7dfbf9d24eef] Running
E0929 13:31:50.226585  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004239613s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.210051961s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-144376 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-144376 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376: exit status 2 (336.960124ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-144376 -n embed-certs-144376: exit status 2 (329.291054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-144376 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-144376 -n embed-certs-144376
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-144376 -n embed-certs-144376
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (62.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.656394878s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504443 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-504443 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443: exit status 2 (330.531585ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443: exit status 2 (333.918168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-504443 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-504443 --alsologtostderr -v=1: (1.015926812s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504443 -n default-k8s-diff-port-504443
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.950242437s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.95s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-411536 "pgrep -a kubelet"
I0929 13:33:03.373798  567516 config.go:182] Loaded profile config "custom-flannel-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-khlg6" [65b831c1-fcc2-4b4f-a772-f8b60850819c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-khlg6" [65b831c1-fcc2-4b4f-a772-f8b60850819c] Running
E0929 13:33:12.148049  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/no-preload-929827/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003441297s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (60.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0929 13:33:36.426816  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:33:37.709155  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:33:40.271452  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:33:45.393156  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-411536 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m0.496987493s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7vvzb" [ba26bcdd-40ab-4372-bdee-60fb5346715f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004389399s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-411536 "pgrep -a kubelet"
I0929 13:33:54.260649  567516 config.go:182] Loaded profile config "enable-default-cni-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6qtfp" [9c7d38b1-d159-4ab1-85b7-0e99d042a85c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 13:33:55.634757  567516 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/default-k8s-diff-port-504443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6qtfp" [9c7d38b1-d159-4ab1-85b7-0e99d042a85c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004575823s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-411536 "pgrep -a kubelet"
I0929 13:34:00.153748  567516 config.go:182] Loaded profile config "flannel-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bcb6q" [65914150-90c5-4769-8cb9-5bc65a57218c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bcb6q" [65914150-90c5-4769-8cb9-5bc65a57218c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004311324s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-411536 "pgrep -a kubelet"
I0929 13:34:36.946627  567516 config.go:182] Loaded profile config "bridge-411536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-411536 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tz4qn" [3d2a9be0-ca1d-4ba7-82a4-42019f5e1aa0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tz4qn" [3d2a9be0-ca1d-4ba7-82a4-42019f5e1aa0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003848644s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-411536 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-411536 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (27/325)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-850167 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-707559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-707559
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-411536 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-300182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-602565
contexts:
- context:
    cluster: kubernetes-upgrade-300182
    user: kubernetes-upgrade-300182
  name: kubernetes-upgrade-300182
- context:
    cluster: pause-602565
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-602565
  name: pause-602565
current-context: kubernetes-upgrade-300182
kind: Config
users:
- name: kubernetes-upgrade-300182
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.key
- name: pause-602565
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-411536

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411536"

                                                
                                                
----------------------- debugLogs end: kubenet-411536 [took: 4.64601689s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-411536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-411536
--- SKIP: TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-411536 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-411536" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-300182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-564029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-602565
contexts:
- context:
    cluster: kubernetes-upgrade-300182
    user: kubernetes-upgrade-300182
  name: kubernetes-upgrade-300182
- context:
    cluster: pause-602565
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-602565
  name: pause-602565
current-context: pause-602565
kind: Config
users:
- name: kubernetes-upgrade-300182
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/kubernetes-upgrade-300182/client.key
- name: pause-602565
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.crt
    client-key: /home/jenkins/minikube-integration/21652-564029/.minikube/profiles/pause-602565/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-411536

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-411536" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411536"

                                                
                                                
----------------------- debugLogs end: cilium-411536 [took: 4.114152432s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-411536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-411536
--- SKIP: TestNetworkPlugins/group/cilium (4.29s)

                                                
                                    